Test Report: KVM_Linux_crio 18602

f0f00e4b78df34cc802665249d4ea4180b698205:2024-05-05:34338

Failed tests (14/275)

TestAddons/parallel/Ingress (153.65s)
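Note: the step that fails in this test is the in-VM curl through the ingress controller; ssh reports "Process exited with status 28", which is curl's exit code for an operation timeout, so nginx never answered through the ingress in the ~2m10s the command ran. A minimal sketch of the same check run by hand, using only the profile name, namespace and Host header that appear in the log below:

	# wait for the ingress-nginx controller, then curl it from inside the VM with the test's Host header
	kubectl --context addons-476078 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	out/minikube-linux-amd64 -p addons-476078 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"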

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-476078 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-476078 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-476078 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ddd318b3-f460-41a7-8b57-def112b59f42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ddd318b3-f460-41a7-8b57-def112b59f42] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003976253s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-476078 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.703302017s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-476078 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.102
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-476078 addons disable ingress-dns --alsologtostderr -v=1: (1.553681669s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-476078 addons disable ingress --alsologtostderr -v=1: (8.065840287s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-476078 -n addons-476078
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-476078 logs -n 25: (1.511771909s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-302864 | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | -p download-only-302864                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| delete  | -p download-only-302864                                                                     | download-only-302864 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| delete  | -p download-only-583025                                                                     | download-only-583025 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| delete  | -p download-only-302864                                                                     | download-only-302864 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-490333 | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | binary-mirror-490333                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42709                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-490333                                                                     | binary-mirror-490333 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| addons  | disable dashboard -p                                                                        | addons-476078        | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-476078        | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-476078 --wait=true                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 21:01 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:01 UTC | 05 May 24 21:01 UTC |
	|         | -p addons-476078                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:01 UTC | 05 May 24 21:01 UTC |
	|         | -p addons-476078                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-476078 ssh cat                                                                       | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | /opt/local-path-provisioner/pvc-cbe9cb1d-6e41-4e52-b663-b8efdb599694_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-476078 ip                                                                            | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-476078 ssh curl -s                                                                   | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-476078 addons                                                                        | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-476078 addons                                                                        | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-476078 ip                                                                            | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:04 UTC | 05 May 24 21:04 UTC |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:04 UTC | 05 May 24 21:04 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:04 UTC | 05 May 24 21:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 20:58:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 20:58:19.757317   19551 out.go:291] Setting OutFile to fd 1 ...
	I0505 20:58:19.757453   19551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:58:19.757463   19551 out.go:304] Setting ErrFile to fd 2...
	I0505 20:58:19.757467   19551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:58:19.757680   19551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 20:58:19.758284   19551 out.go:298] Setting JSON to false
	I0505 20:58:19.759112   19551 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2447,"bootTime":1714940253,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 20:58:19.759168   19551 start.go:139] virtualization: kvm guest
	I0505 20:58:19.761428   19551 out.go:177] * [addons-476078] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 20:58:19.762736   19551 notify.go:220] Checking for updates...
	I0505 20:58:19.762748   19551 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 20:58:19.764237   19551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 20:58:19.765796   19551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 20:58:19.767345   19551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:58:19.768946   19551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 20:58:19.770404   19551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 20:58:19.771876   19551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 20:58:19.802911   19551 out.go:177] * Using the kvm2 driver based on user configuration
	I0505 20:58:19.804366   19551 start.go:297] selected driver: kvm2
	I0505 20:58:19.804387   19551 start.go:901] validating driver "kvm2" against <nil>
	I0505 20:58:19.804401   19551 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 20:58:19.805044   19551 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:58:19.805118   19551 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 20:58:19.818711   19551 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 20:58:19.818757   19551 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 20:58:19.818950   19551 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 20:58:19.819014   19551 cni.go:84] Creating CNI manager for ""
	I0505 20:58:19.819033   19551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:58:19.819046   19551 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 20:58:19.819120   19551 start.go:340] cluster config:
	{Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 20:58:19.819221   19551 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:58:19.821003   19551 out.go:177] * Starting "addons-476078" primary control-plane node in "addons-476078" cluster
	I0505 20:58:19.822273   19551 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 20:58:19.822310   19551 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 20:58:19.822323   19551 cache.go:56] Caching tarball of preloaded images
	I0505 20:58:19.822397   19551 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 20:58:19.822410   19551 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 20:58:19.822703   19551 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/config.json ...
	I0505 20:58:19.822735   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/config.json: {Name:mkbb67ee823096213b7c142e1c0e129bcf056988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:19.822865   19551 start.go:360] acquireMachinesLock for addons-476078: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 20:58:19.822925   19551 start.go:364] duration metric: took 43.716µs to acquireMachinesLock for "addons-476078"
	I0505 20:58:19.822948   19551 start.go:93] Provisioning new machine with config: &{Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 20:58:19.823025   19551 start.go:125] createHost starting for "" (driver="kvm2")
	I0505 20:58:19.825229   19551 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0505 20:58:19.825363   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:58:19.825413   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:58:19.838808   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0505 20:58:19.839168   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:58:19.839681   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:58:19.839710   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:58:19.840023   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:58:19.840183   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:19.840328   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:19.840462   19551 start.go:159] libmachine.API.Create for "addons-476078" (driver="kvm2")
	I0505 20:58:19.840493   19551 client.go:168] LocalClient.Create starting
	I0505 20:58:19.840550   19551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 20:58:19.888731   19551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 20:58:20.060555   19551 main.go:141] libmachine: Running pre-create checks...
	I0505 20:58:20.060581   19551 main.go:141] libmachine: (addons-476078) Calling .PreCreateCheck
	I0505 20:58:20.061101   19551 main.go:141] libmachine: (addons-476078) Calling .GetConfigRaw
	I0505 20:58:20.061496   19551 main.go:141] libmachine: Creating machine...
	I0505 20:58:20.061513   19551 main.go:141] libmachine: (addons-476078) Calling .Create
	I0505 20:58:20.061654   19551 main.go:141] libmachine: (addons-476078) Creating KVM machine...
	I0505 20:58:20.062885   19551 main.go:141] libmachine: (addons-476078) DBG | found existing default KVM network
	I0505 20:58:20.063579   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.063404   19573 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0505 20:58:20.063613   19551 main.go:141] libmachine: (addons-476078) DBG | created network xml: 
	I0505 20:58:20.063639   19551 main.go:141] libmachine: (addons-476078) DBG | <network>
	I0505 20:58:20.063650   19551 main.go:141] libmachine: (addons-476078) DBG |   <name>mk-addons-476078</name>
	I0505 20:58:20.063659   19551 main.go:141] libmachine: (addons-476078) DBG |   <dns enable='no'/>
	I0505 20:58:20.063666   19551 main.go:141] libmachine: (addons-476078) DBG |   
	I0505 20:58:20.063675   19551 main.go:141] libmachine: (addons-476078) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0505 20:58:20.063690   19551 main.go:141] libmachine: (addons-476078) DBG |     <dhcp>
	I0505 20:58:20.063721   19551 main.go:141] libmachine: (addons-476078) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0505 20:58:20.063744   19551 main.go:141] libmachine: (addons-476078) DBG |     </dhcp>
	I0505 20:58:20.063755   19551 main.go:141] libmachine: (addons-476078) DBG |   </ip>
	I0505 20:58:20.063766   19551 main.go:141] libmachine: (addons-476078) DBG |   
	I0505 20:58:20.063779   19551 main.go:141] libmachine: (addons-476078) DBG | </network>
	I0505 20:58:20.063787   19551 main.go:141] libmachine: (addons-476078) DBG | 
	I0505 20:58:20.069044   19551 main.go:141] libmachine: (addons-476078) DBG | trying to create private KVM network mk-addons-476078 192.168.39.0/24...
	I0505 20:58:20.133558   19551 main.go:141] libmachine: (addons-476078) DBG | private KVM network mk-addons-476078 192.168.39.0/24 created
	I0505 20:58:20.133583   19551 main.go:141] libmachine: (addons-476078) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078 ...
	I0505 20:58:20.133615   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.133521   19573 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:58:20.133637   19551 main.go:141] libmachine: (addons-476078) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 20:58:20.133656   19551 main.go:141] libmachine: (addons-476078) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 20:58:20.373568   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.373369   19573 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa...
	I0505 20:58:20.505595   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.505434   19573 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/addons-476078.rawdisk...
	I0505 20:58:20.505635   19551 main.go:141] libmachine: (addons-476078) DBG | Writing magic tar header
	I0505 20:58:20.505653   19551 main.go:141] libmachine: (addons-476078) DBG | Writing SSH key tar header
	I0505 20:58:20.505665   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.505590   19573 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078 ...
	I0505 20:58:20.505765   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078
	I0505 20:58:20.505785   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 20:58:20.505809   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:58:20.505824   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078 (perms=drwx------)
	I0505 20:58:20.505833   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 20:58:20.505845   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 20:58:20.505852   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins
	I0505 20:58:20.505860   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home
	I0505 20:58:20.505867   19551 main.go:141] libmachine: (addons-476078) DBG | Skipping /home - not owner
	I0505 20:58:20.505886   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 20:58:20.505897   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 20:58:20.505949   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 20:58:20.505996   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 20:58:20.506019   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 20:58:20.506037   19551 main.go:141] libmachine: (addons-476078) Creating domain...
	I0505 20:58:20.507040   19551 main.go:141] libmachine: (addons-476078) define libvirt domain using xml: 
	I0505 20:58:20.507069   19551 main.go:141] libmachine: (addons-476078) <domain type='kvm'>
	I0505 20:58:20.507081   19551 main.go:141] libmachine: (addons-476078)   <name>addons-476078</name>
	I0505 20:58:20.507093   19551 main.go:141] libmachine: (addons-476078)   <memory unit='MiB'>4000</memory>
	I0505 20:58:20.507103   19551 main.go:141] libmachine: (addons-476078)   <vcpu>2</vcpu>
	I0505 20:58:20.507114   19551 main.go:141] libmachine: (addons-476078)   <features>
	I0505 20:58:20.507123   19551 main.go:141] libmachine: (addons-476078)     <acpi/>
	I0505 20:58:20.507133   19551 main.go:141] libmachine: (addons-476078)     <apic/>
	I0505 20:58:20.507142   19551 main.go:141] libmachine: (addons-476078)     <pae/>
	I0505 20:58:20.507152   19551 main.go:141] libmachine: (addons-476078)     
	I0505 20:58:20.507161   19551 main.go:141] libmachine: (addons-476078)   </features>
	I0505 20:58:20.507177   19551 main.go:141] libmachine: (addons-476078)   <cpu mode='host-passthrough'>
	I0505 20:58:20.507188   19551 main.go:141] libmachine: (addons-476078)   
	I0505 20:58:20.507203   19551 main.go:141] libmachine: (addons-476078)   </cpu>
	I0505 20:58:20.507216   19551 main.go:141] libmachine: (addons-476078)   <os>
	I0505 20:58:20.507225   19551 main.go:141] libmachine: (addons-476078)     <type>hvm</type>
	I0505 20:58:20.507236   19551 main.go:141] libmachine: (addons-476078)     <boot dev='cdrom'/>
	I0505 20:58:20.507244   19551 main.go:141] libmachine: (addons-476078)     <boot dev='hd'/>
	I0505 20:58:20.507267   19551 main.go:141] libmachine: (addons-476078)     <bootmenu enable='no'/>
	I0505 20:58:20.507292   19551 main.go:141] libmachine: (addons-476078)   </os>
	I0505 20:58:20.507298   19551 main.go:141] libmachine: (addons-476078)   <devices>
	I0505 20:58:20.507308   19551 main.go:141] libmachine: (addons-476078)     <disk type='file' device='cdrom'>
	I0505 20:58:20.507329   19551 main.go:141] libmachine: (addons-476078)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/boot2docker.iso'/>
	I0505 20:58:20.507338   19551 main.go:141] libmachine: (addons-476078)       <target dev='hdc' bus='scsi'/>
	I0505 20:58:20.507365   19551 main.go:141] libmachine: (addons-476078)       <readonly/>
	I0505 20:58:20.507382   19551 main.go:141] libmachine: (addons-476078)     </disk>
	I0505 20:58:20.507396   19551 main.go:141] libmachine: (addons-476078)     <disk type='file' device='disk'>
	I0505 20:58:20.507410   19551 main.go:141] libmachine: (addons-476078)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 20:58:20.507431   19551 main.go:141] libmachine: (addons-476078)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/addons-476078.rawdisk'/>
	I0505 20:58:20.507442   19551 main.go:141] libmachine: (addons-476078)       <target dev='hda' bus='virtio'/>
	I0505 20:58:20.507449   19551 main.go:141] libmachine: (addons-476078)     </disk>
	I0505 20:58:20.507457   19551 main.go:141] libmachine: (addons-476078)     <interface type='network'>
	I0505 20:58:20.507464   19551 main.go:141] libmachine: (addons-476078)       <source network='mk-addons-476078'/>
	I0505 20:58:20.507471   19551 main.go:141] libmachine: (addons-476078)       <model type='virtio'/>
	I0505 20:58:20.507493   19551 main.go:141] libmachine: (addons-476078)     </interface>
	I0505 20:58:20.507503   19551 main.go:141] libmachine: (addons-476078)     <interface type='network'>
	I0505 20:58:20.507509   19551 main.go:141] libmachine: (addons-476078)       <source network='default'/>
	I0505 20:58:20.507517   19551 main.go:141] libmachine: (addons-476078)       <model type='virtio'/>
	I0505 20:58:20.507523   19551 main.go:141] libmachine: (addons-476078)     </interface>
	I0505 20:58:20.507535   19551 main.go:141] libmachine: (addons-476078)     <serial type='pty'>
	I0505 20:58:20.507542   19551 main.go:141] libmachine: (addons-476078)       <target port='0'/>
	I0505 20:58:20.507549   19551 main.go:141] libmachine: (addons-476078)     </serial>
	I0505 20:58:20.507570   19551 main.go:141] libmachine: (addons-476078)     <console type='pty'>
	I0505 20:58:20.507587   19551 main.go:141] libmachine: (addons-476078)       <target type='serial' port='0'/>
	I0505 20:58:20.507596   19551 main.go:141] libmachine: (addons-476078)     </console>
	I0505 20:58:20.507602   19551 main.go:141] libmachine: (addons-476078)     <rng model='virtio'>
	I0505 20:58:20.507608   19551 main.go:141] libmachine: (addons-476078)       <backend model='random'>/dev/random</backend>
	I0505 20:58:20.507616   19551 main.go:141] libmachine: (addons-476078)     </rng>
	I0505 20:58:20.507621   19551 main.go:141] libmachine: (addons-476078)     
	I0505 20:58:20.507635   19551 main.go:141] libmachine: (addons-476078)     
	I0505 20:58:20.507644   19551 main.go:141] libmachine: (addons-476078)   </devices>
	I0505 20:58:20.507652   19551 main.go:141] libmachine: (addons-476078) </domain>
	I0505 20:58:20.507659   19551 main.go:141] libmachine: (addons-476078) 
	I0505 20:58:20.513326   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:40:b2:3f in network default
	I0505 20:58:20.513914   19551 main.go:141] libmachine: (addons-476078) Ensuring networks are active...
	I0505 20:58:20.513932   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:20.514577   19551 main.go:141] libmachine: (addons-476078) Ensuring network default is active
	I0505 20:58:20.514835   19551 main.go:141] libmachine: (addons-476078) Ensuring network mk-addons-476078 is active
	I0505 20:58:20.515297   19551 main.go:141] libmachine: (addons-476078) Getting domain xml...
	I0505 20:58:20.515903   19551 main.go:141] libmachine: (addons-476078) Creating domain...
	I0505 20:58:21.894308   19551 main.go:141] libmachine: (addons-476078) Waiting to get IP...
	I0505 20:58:21.895004   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:21.895495   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:21.895525   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:21.895455   19573 retry.go:31] will retry after 294.594849ms: waiting for machine to come up
	I0505 20:58:22.192385   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:22.192942   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:22.192971   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:22.192888   19573 retry.go:31] will retry after 342.366044ms: waiting for machine to come up
	I0505 20:58:22.536486   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:22.536948   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:22.536978   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:22.536901   19573 retry.go:31] will retry after 462.108476ms: waiting for machine to come up
	I0505 20:58:23.000473   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:23.000925   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:23.000955   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:23.000868   19573 retry.go:31] will retry after 531.892809ms: waiting for machine to come up
	I0505 20:58:23.534681   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:23.535139   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:23.535165   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:23.535106   19573 retry.go:31] will retry after 483.047428ms: waiting for machine to come up
	I0505 20:58:24.019852   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:24.020332   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:24.020370   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:24.020240   19573 retry.go:31] will retry after 707.426774ms: waiting for machine to come up
	I0505 20:58:24.730699   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:24.731059   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:24.731084   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:24.731007   19573 retry.go:31] will retry after 832.935037ms: waiting for machine to come up
	I0505 20:58:25.565836   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:25.566268   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:25.566297   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:25.566222   19573 retry.go:31] will retry after 1.413947965s: waiting for machine to come up
	I0505 20:58:26.981758   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:26.982232   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:26.982258   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:26.982185   19573 retry.go:31] will retry after 1.825001378s: waiting for machine to come up
	I0505 20:58:28.809609   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:28.810255   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:28.810285   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:28.810215   19573 retry.go:31] will retry after 1.881229823s: waiting for machine to come up
	I0505 20:58:30.693320   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:30.693813   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:30.693844   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:30.693750   19573 retry.go:31] will retry after 2.591326187s: waiting for machine to come up
	I0505 20:58:33.286251   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:33.286563   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:33.286593   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:33.286505   19573 retry.go:31] will retry after 3.368249883s: waiting for machine to come up
	I0505 20:58:36.657463   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:36.657799   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:36.657821   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:36.657745   19573 retry.go:31] will retry after 4.19015471s: waiting for machine to come up
	I0505 20:58:40.850037   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:40.850494   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:40.850516   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:40.850447   19573 retry.go:31] will retry after 3.963765257s: waiting for machine to come up
	I0505 20:58:44.818526   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.819001   19551 main.go:141] libmachine: (addons-476078) Found IP for machine: 192.168.39.102
	I0505 20:58:44.819031   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has current primary IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.819040   19551 main.go:141] libmachine: (addons-476078) Reserving static IP address...
	I0505 20:58:44.819406   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find host DHCP lease matching {name: "addons-476078", mac: "52:54:00:48:a4:72", ip: "192.168.39.102"} in network mk-addons-476078
	I0505 20:58:44.886209   19551 main.go:141] libmachine: (addons-476078) Reserved static IP address: 192.168.39.102
	I0505 20:58:44.886231   19551 main.go:141] libmachine: (addons-476078) Waiting for SSH to be available...
	I0505 20:58:44.886242   19551 main.go:141] libmachine: (addons-476078) DBG | Getting to WaitForSSH function...
	I0505 20:58:44.888711   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.889178   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:44.889207   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.889424   19551 main.go:141] libmachine: (addons-476078) DBG | Using SSH client type: external
	I0505 20:58:44.889454   19551 main.go:141] libmachine: (addons-476078) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa (-rw-------)
	I0505 20:58:44.889500   19551 main.go:141] libmachine: (addons-476078) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 20:58:44.889516   19551 main.go:141] libmachine: (addons-476078) DBG | About to run SSH command:
	I0505 20:58:44.889533   19551 main.go:141] libmachine: (addons-476078) DBG | exit 0
	I0505 20:58:45.024268   19551 main.go:141] libmachine: (addons-476078) DBG | SSH cmd err, output: <nil>: 
	I0505 20:58:45.024562   19551 main.go:141] libmachine: (addons-476078) KVM machine creation complete!
	I0505 20:58:45.024896   19551 main.go:141] libmachine: (addons-476078) Calling .GetConfigRaw
	I0505 20:58:45.025418   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:45.025600   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:45.025796   19551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 20:58:45.025811   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:58:45.027072   19551 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 20:58:45.027091   19551 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 20:58:45.027099   19551 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 20:58:45.027107   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.029206   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.029534   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.029566   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.029695   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.029871   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.030021   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.030161   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.030322   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.030484   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.030494   19551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 20:58:45.143208   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 20:58:45.143231   19551 main.go:141] libmachine: Detecting the provisioner...
	I0505 20:58:45.143241   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.146058   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.146469   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.146505   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.146631   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.146839   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.147022   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.147171   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.147319   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.147469   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.147495   19551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 20:58:45.260971   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 20:58:45.261048   19551 main.go:141] libmachine: found compatible host: buildroot
	I0505 20:58:45.261062   19551 main.go:141] libmachine: Provisioning with buildroot...
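
Provisioner detection above boils down to reading /etc/os-release over SSH and matching the ID field ("buildroot" for the minikube guest image). A small sketch of that parsing step, run locally for illustration:

    // osrelease_sketch.go
    // Read /etc/os-release and return the ID= value, which is what the
    // "found compatible host: buildroot" decision is based on.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func osReleaseID(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
            }
        }
        return "", fmt.Errorf("ID= not found in %s", path)
    }

    func main() {
        id, err := osReleaseID("/etc/os-release")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("detected provisioner family:", id) // e.g. "buildroot"
    }
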
	I0505 20:58:45.261074   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:45.261401   19551 buildroot.go:166] provisioning hostname "addons-476078"
	I0505 20:58:45.261429   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:45.261587   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.264079   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.264450   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.264477   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.264629   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.264792   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.264961   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.265120   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.265285   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.265441   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.265454   19551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-476078 && echo "addons-476078" | sudo tee /etc/hostname
	I0505 20:58:45.395472   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-476078
	
	I0505 20:58:45.395515   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.398148   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.398457   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.398479   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.398663   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.398881   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.399046   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.399187   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.399347   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.399562   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.399584   19551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-476078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-476078/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-476078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 20:58:45.522547   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
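
The shell snippet above makes the hostname resolvable inside the guest: if no /etc/hosts line already maps addons-476078, it rewrites an existing 127.0.1.1 entry or appends one. A Go sketch of the same idempotent edit (paths and hostname from the log; error handling kept minimal, and in the real flow this runs as root over SSH):

    // hosts_sketch.go
    // Ensure /etc/hosts contains a 127.0.1.1 mapping for the node hostname.
    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil // already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out string
        if loopback.Match(data) {
            out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "addons-476078"); err != nil {
            fmt.Println(err)
        }
    }
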
	I0505 20:58:45.522584   19551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 20:58:45.522611   19551 buildroot.go:174] setting up certificates
	I0505 20:58:45.522629   19551 provision.go:84] configureAuth start
	I0505 20:58:45.522647   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:45.522949   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:45.525565   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.525878   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.525906   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.526046   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.528307   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.528663   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.528692   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.528810   19551 provision.go:143] copyHostCerts
	I0505 20:58:45.528900   19551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 20:58:45.529061   19551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 20:58:45.529151   19551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 20:58:45.529234   19551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.addons-476078 san=[127.0.0.1 192.168.39.102 addons-476078 localhost minikube]
	I0505 20:58:45.659193   19551 provision.go:177] copyRemoteCerts
	I0505 20:58:45.659265   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 20:58:45.659292   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.661779   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.662078   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.662104   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.662332   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.662518   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.662674   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.662764   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:45.750236   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 20:58:45.777175   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 20:58:45.803498   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 20:58:45.829258   19551 provision.go:87] duration metric: took 306.614751ms to configureAuth
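
configureAuth above generates a server certificate whose SANs cover every name and address the guest may be reached by (127.0.0.1, 192.168.39.102, addons-476078, localhost, minikube) and copies it to /etc/docker on the guest. A compact sketch of generating such a SAN-bearing cert with crypto/x509; it uses a throwaway in-memory CA purely for illustration, whereas minikube signs with the ca.pem/ca-key.pem from its cert directory:

    // servercert_sketch.go
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // must keeps the sketch short: panic on any setup error.
    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "sketchCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Server cert whose SANs match the san=[...] list in the log above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "addons-476078"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"addons-476078", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
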
	I0505 20:58:45.829282   19551 buildroot.go:189] setting minikube options for container-runtime
	I0505 20:58:45.829482   19551 config.go:182] Loaded profile config "addons-476078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 20:58:45.829565   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.832064   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.832455   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.832524   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.832710   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.832907   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.833067   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.833212   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.833366   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.833522   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.833536   19551 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 20:58:46.114998   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
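
The command above drops the CRIO_MINIKUBE_OPTIONS environment file into /etc/sysconfig and restarts cri-o so the insecure-registry flag for the service CIDR takes effect. A sketch of the same two steps done locally in Go (the log performs them as root over SSH; this assumes equivalent privileges):

    // criodropin_sketch.go
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const dropIn = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
            fmt.Println(err)
            return
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0644); err != nil {
            fmt.Println(err)
            return
        }
        // Restart cri-o so it picks up the new options.
        if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
            fmt.Printf("restart failed: %v\n%s", err, out)
        }
    }
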
	
	I0505 20:58:46.115028   19551 main.go:141] libmachine: Checking connection to Docker...
	I0505 20:58:46.115035   19551 main.go:141] libmachine: (addons-476078) Calling .GetURL
	I0505 20:58:46.116350   19551 main.go:141] libmachine: (addons-476078) DBG | Using libvirt version 6000000
	I0505 20:58:46.118448   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.118735   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.118767   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.118889   19551 main.go:141] libmachine: Docker is up and running!
	I0505 20:58:46.118909   19551 main.go:141] libmachine: Reticulating splines...
	I0505 20:58:46.118918   19551 client.go:171] duration metric: took 26.278413629s to LocalClient.Create
	I0505 20:58:46.118942   19551 start.go:167] duration metric: took 26.278480373s to libmachine.API.Create "addons-476078"
	I0505 20:58:46.118959   19551 start.go:293] postStartSetup for "addons-476078" (driver="kvm2")
	I0505 20:58:46.118978   19551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 20:58:46.118998   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.119244   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 20:58:46.119265   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.121121   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.121390   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.121430   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.121544   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.121724   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.121903   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.122026   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:46.211729   19551 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 20:58:46.216656   19551 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 20:58:46.216677   19551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 20:58:46.216743   19551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 20:58:46.216766   19551 start.go:296] duration metric: took 97.798979ms for postStartSetup
	I0505 20:58:46.216797   19551 main.go:141] libmachine: (addons-476078) Calling .GetConfigRaw
	I0505 20:58:46.217321   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:46.219994   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.220327   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.220357   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.220525   19551 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/config.json ...
	I0505 20:58:46.220717   19551 start.go:128] duration metric: took 26.397680863s to createHost
	I0505 20:58:46.220741   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.222813   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.223117   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.223151   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.223267   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.223445   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.223575   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.223713   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.223901   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:46.224061   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:46.224072   19551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 20:58:46.336892   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714942726.298577736
	
	I0505 20:58:46.336915   19551 fix.go:216] guest clock: 1714942726.298577736
	I0505 20:58:46.336924   19551 fix.go:229] Guest: 2024-05-05 20:58:46.298577736 +0000 UTC Remote: 2024-05-05 20:58:46.220732058 +0000 UTC m=+26.508674640 (delta=77.845678ms)
	I0505 20:58:46.336947   19551 fix.go:200] guest clock delta is within tolerance: 77.845678ms
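
The clock check above compares the guest's `date +%s.%N` reading against the host's wall clock at the same moment and accepts the ~78ms difference as within tolerance. A small sketch reproducing that comparison from the two timestamps printed in the log (the one-second tolerance here is an assumption for illustration):

    // clockdelta_sketch.go
    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Both readings are copied from the log lines above.
        secs, _ := strconv.ParseFloat("1714942726.298577736", 64)
        guest := time.Unix(0, int64(secs*float64(time.Second))).UTC()
        host, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
            "2024-05-05 20:58:46.220732058 +0000 UTC")

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance for this sketch
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }
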
	I0505 20:58:46.336954   19551 start.go:83] releasing machines lock for "addons-476078", held for 26.514017864s
	I0505 20:58:46.336980   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.337313   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:46.340294   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.340652   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.340676   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.340835   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.341330   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.341534   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.341618   19551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 20:58:46.341675   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.341725   19551 ssh_runner.go:195] Run: cat /version.json
	I0505 20:58:46.341758   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.344293   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.344551   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.344583   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.344702   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.344741   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.344898   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.345073   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.345095   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.345116   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.345235   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:46.345323   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.345458   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.345583   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.345729   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:46.453060   19551 ssh_runner.go:195] Run: systemctl --version
	I0505 20:58:46.460876   19551 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 20:58:46.637888   19551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 20:58:46.645668   19551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 20:58:46.645732   19551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 20:58:46.662465   19551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 20:58:46.662485   19551 start.go:494] detecting cgroup driver to use...
	I0505 20:58:46.662542   19551 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 20:58:46.679392   19551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 20:58:46.694801   19551 docker.go:217] disabling cri-docker service (if available) ...
	I0505 20:58:46.694860   19551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 20:58:46.710029   19551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 20:58:46.725451   19551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 20:58:46.846970   19551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 20:58:47.006823   19551 docker.go:233] disabling docker service ...
	I0505 20:58:47.006900   19551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 20:58:47.023437   19551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 20:58:47.037886   19551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 20:58:47.181829   19551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 20:58:47.316772   19551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 20:58:47.332048   19551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 20:58:47.352783   19551 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 20:58:47.352874   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.364822   19551 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 20:58:47.364877   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.376714   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.388959   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.400801   19551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 20:58:47.413168   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.424825   19551 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.443315   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.455948   19551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 20:58:47.467082   19551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 20:58:47.467143   19551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 20:58:47.481949   19551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
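
The sequence above prepares kernel networking for the CNI bridge: the bridge-nf-call-iptables sysctl only exists once br_netfilter is loaded, so the failed probe is followed by a modprobe, and IPv4 forwarding is switched on. A local Go sketch of the same fallback logic (assumes root; commands mirror the log):

    // netfilter_sketch.go
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // The sysctl appears only after br_netfilter is loaded.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe br_netfilter: %v\n%s", err, out)
                return
            }
        }
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Println("enable ip_forward:", err)
        }
    }
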
	I0505 20:58:47.492713   19551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 20:58:47.623992   19551 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 20:58:47.766117   19551 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 20:58:47.766210   19551 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 20:58:47.772258   19551 start.go:562] Will wait 60s for crictl version
	I0505 20:58:47.772327   19551 ssh_runner.go:195] Run: which crictl
	I0505 20:58:47.776548   19551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 20:58:47.817373   19551 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 20:58:47.817498   19551 ssh_runner.go:195] Run: crio --version
	I0505 20:58:47.847485   19551 ssh_runner.go:195] Run: crio --version
	I0505 20:58:47.881780   19551 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 20:58:47.883282   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:47.886092   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:47.886404   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:47.886436   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:47.886659   19551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 20:58:47.891519   19551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 20:58:47.906291   19551 kubeadm.go:877] updating cluster {Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 20:58:47.906407   19551 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 20:58:47.906447   19551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 20:58:47.941686   19551 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0505 20:58:47.941752   19551 ssh_runner.go:195] Run: which lz4
	I0505 20:58:47.946270   19551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 20:58:47.950885   19551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 20:58:47.950917   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0505 20:58:49.512139   19551 crio.go:462] duration metric: took 1.565902638s to copy over tarball
	I0505 20:58:49.512224   19551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 20:58:52.068891   19551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.556635119s)
	I0505 20:58:52.068924   19551 crio.go:469] duration metric: took 2.556753797s to extract the tarball
	I0505 20:58:52.068937   19551 ssh_runner.go:146] rm: /preloaded.tar.lz4
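
The preload step above copies a ~395MB lz4-compressed tarball of container images to the guest, extracts it into /var while preserving security.capability xattrs, and removes the tarball. A sketch of the extract-and-clean-up portion (assumes the tarball is already at /preloaded.tar.lz4 and that tar and lz4 are present, as they are in the minikube guest image):

    // preload_sketch.go
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        _ = os.Remove("/preloaded.tar.lz4") // may need root, as in the log
    }
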
	I0505 20:58:52.108491   19551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 20:58:52.155406   19551 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 20:58:52.155437   19551 cache_images.go:84] Images are preloaded, skipping loading
	I0505 20:58:52.155447   19551 kubeadm.go:928] updating node { 192.168.39.102 8443 v1.30.0 crio true true} ...
	I0505 20:58:52.155579   19551 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-476078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 20:58:52.155642   19551 ssh_runner.go:195] Run: crio config
	I0505 20:58:52.202989   19551 cni.go:84] Creating CNI manager for ""
	I0505 20:58:52.203008   19551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:58:52.203019   19551 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 20:58:52.203038   19551 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-476078 NodeName:addons-476078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 20:58:52.203165   19551 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-476078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
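
The three kubeadm documents above are generated from the per-node values shown in the kubeadm options dump (advertise address, node name, Kubernetes version, CRI socket, and so on). A sketch of how such a config can be rendered with text/template; only a fragment of the InitConfiguration is shown, and the field values are the ones from this run:

    // kubeadmcfg_sketch.go
    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        data := struct {
            NodeIP        string
            NodeName      string
            APIServerPort int
        }{"192.168.39.102", "addons-476078", 8443}
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data)
    }
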
	
	I0505 20:58:52.203223   19551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 20:58:52.213721   19551 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 20:58:52.213795   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 20:58:52.223423   19551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0505 20:58:52.241377   19551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 20:58:52.259367   19551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0505 20:58:52.277579   19551 ssh_runner.go:195] Run: grep 192.168.39.102	control-plane.minikube.internal$ /etc/hosts
	I0505 20:58:52.281843   19551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 20:58:52.294963   19551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 20:58:52.417648   19551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 20:58:52.434892   19551 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078 for IP: 192.168.39.102
	I0505 20:58:52.434912   19551 certs.go:194] generating shared ca certs ...
	I0505 20:58:52.434934   19551 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.435079   19551 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 20:58:52.555665   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt ...
	I0505 20:58:52.555693   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt: {Name:mke0edbd56f4a544e61431caa27ba4d5ab06e9ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.555845   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key ...
	I0505 20:58:52.555856   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key: {Name:mkfcd1b8ff14190bc149d6ff4e622064f68787ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.555920   19551 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 20:58:52.655889   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt ...
	I0505 20:58:52.655917   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt: {Name:mk1f26915abb39dda57f3a5f42e923d93c16b588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.656059   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key ...
	I0505 20:58:52.656072   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key: {Name:mkedd440eedb133e50e3b3b00ea464a51e3ea7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.656131   19551 certs.go:256] generating profile certs ...
	I0505 20:58:52.656201   19551 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.key
	I0505 20:58:52.656223   19551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt with IP's: []
	I0505 20:58:52.734141   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt ...
	I0505 20:58:52.734172   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: {Name:mk906155bf9b2932840b4dde633971c6458e573f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.734338   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.key ...
	I0505 20:58:52.734352   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.key: {Name:mk0b92a84e45934a4771366a8efb554eb3f13ebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.734449   19551 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6
	I0505 20:58:52.734472   19551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102]
	I0505 20:58:52.787920   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6 ...
	I0505 20:58:52.787950   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6: {Name:mkecd1630b33ef4018da87ed58b0d4ce2dfdc2bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.788111   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6 ...
	I0505 20:58:52.788127   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6: {Name:mk6850b3807c47a8030388d9e2df00e859760544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.788219   19551 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt
	I0505 20:58:52.788308   19551 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key
	I0505 20:58:52.788377   19551 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key
	I0505 20:58:52.788403   19551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt with IP's: []
	I0505 20:58:52.917147   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt ...
	I0505 20:58:52.917175   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt: {Name:mk5227d7b6aadc569f4e72cd5f4cc833e89dc2ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.917349   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key ...
	I0505 20:58:52.917363   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key: {Name:mk2cc3cfd4eb822fb567db7c94bb8e67039e2892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
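
One detail worth noting in the apiserver cert above: its SAN list includes 10.96.0.1, which is the first host address of the service CIDR (10.96.0.0/12) and therefore the ClusterIP of the in-cluster `kubernetes` Service that fronts the API server. A tiny sketch of deriving that address:

    // servicecidr_sketch.go
    package main

    import (
        "fmt"
        "net"
    )

    // firstHostIP returns the first usable address of an IPv4 CIDR
    // (network address + 1).
    func firstHostIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("IPv4 CIDR expected: %s", cidr)
        }
        first := make(net.IP, len(ip))
        copy(first, ip)
        first[3]++
        return first, nil
    }

    func main() {
        ip, err := firstHostIP("10.96.0.0/12")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(ip) // 10.96.0.1
    }
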
	I0505 20:58:52.917566   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 20:58:52.917619   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 20:58:52.917655   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 20:58:52.917689   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 20:58:52.918259   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 20:58:52.951260   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 20:58:52.981474   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 20:58:53.013443   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 20:58:53.043503   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0505 20:58:53.069988   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 20:58:53.098788   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 20:58:53.127765   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 20:58:53.172749   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 20:58:53.200287   19551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 20:58:53.218429   19551 ssh_runner.go:195] Run: openssl version
	I0505 20:58:53.224924   19551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 20:58:53.236444   19551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 20:58:53.241512   19551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 20:58:53.241567   19551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 20:58:53.247757   19551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
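
The b5213941.0 symlink above is how the CA lands in the system trust store: OpenSSL-based clients look up CAs in /etc/ssl/certs by the subject-name hash, so the cert is linked as "<hash>.0". A sketch of that wiring, shelling out to the same `openssl x509 -hash -noout` invocation the log uses (assumes openssl on PATH and write access to the directory):

    // certhashlink_sketch.go
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA.pem
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
                fmt.Println(err)
            }
        }
    }
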
	I0505 20:58:53.259783   19551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 20:58:53.264416   19551 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 20:58:53.264469   19551 kubeadm.go:391] StartCluster: {Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 20:58:53.264564   19551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 20:58:53.264625   19551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 20:58:53.304288   19551 cri.go:89] found id: ""
	I0505 20:58:53.304356   19551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0505 20:58:53.316757   19551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 20:58:53.327832   19551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 20:58:53.338498   19551 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 20:58:53.338518   19551 kubeadm.go:156] found existing configuration files:
	
	I0505 20:58:53.338594   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 20:58:53.348729   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 20:58:53.348789   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 20:58:53.359306   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 20:58:53.369269   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 20:58:53.369324   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 20:58:53.379770   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 20:58:53.389591   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 20:58:53.389637   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 20:58:53.400134   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 20:58:53.409862   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 20:58:53.409892   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
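Each of the four kubeconfig checks above follows the same pattern: grep the file for the expected control-plane endpoint, and remove it when the check exits non-zero. A condensed manual equivalent, using the same endpoint and paths that appear in the log, would be:

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # keep the file only if it already points at the expected endpoint
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done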
	I0505 20:58:53.420026   19551 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
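The config file handed to kubeadm here, /var/tmp/minikube/kubeadm.yaml, is not reproduced in the log. Judging only from the version, endpoint, and directories that appear in the surrounding output, it is roughly of this shape; the sketch below is illustrative, written to a throwaway path, and not the actual file minikube generated:

cat <<'EOF' > /tmp/kubeadm-sketch.yaml   # hypothetical path, for illustration only
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
controlPlaneEndpoint: control-plane.minikube.internal:8443
certificatesDir: /var/lib/minikube/certs
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
EOF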
	I0505 20:58:53.478620   19551 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0505 20:58:53.478726   19551 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 20:58:53.617536   19551 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 20:58:53.617677   19551 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 20:58:53.617804   19551 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0505 20:58:53.841994   19551 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 20:58:54.044603   19551 out.go:204]   - Generating certificates and keys ...
	I0505 20:58:54.044763   19551 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 20:58:54.044851   19551 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 20:58:54.044963   19551 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0505 20:58:54.045056   19551 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0505 20:58:54.178170   19551 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0505 20:58:54.222250   19551 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0505 20:58:54.357687   19551 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0505 20:58:54.357851   19551 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-476078 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0505 20:58:54.510379   19551 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0505 20:58:54.510544   19551 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-476078 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0505 20:58:54.678675   19551 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0505 20:58:55.017961   19551 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0505 20:58:55.164159   19551 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0505 20:58:55.164280   19551 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 20:58:55.226065   19551 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 20:58:55.438189   19551 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0505 20:58:55.499677   19551 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 20:58:55.708458   19551 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 20:58:55.842164   19551 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 20:58:55.842381   19551 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 20:58:55.845799   19551 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 20:58:55.847621   19551 out.go:204]   - Booting up control plane ...
	I0505 20:58:55.847714   19551 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 20:58:55.847797   19551 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 20:58:55.847889   19551 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 20:58:55.864623   19551 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 20:58:55.865493   19551 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 20:58:55.865563   19551 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 20:58:56.017849   19551 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0505 20:58:56.017954   19551 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0505 20:58:57.018316   19551 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001049337s
	I0505 20:58:57.018430   19551 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0505 20:59:02.018068   19551 kubeadm.go:309] [api-check] The API server is healthy after 5.001373355s
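The [api-check] phase simply polls the API server until it reports healthy. With admin.conf now in place, the same check can be repeated by hand against the endpoint shown in the log (192.168.39.102:8443); both forms below assume default kubeadm flags and may need credentials if anonymous health probes are disabled:

sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw='/livez?verbose'
# or a quick unauthenticated probe, skipping certificate verification
curl -k https://192.168.39.102:8443/livez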
	I0505 20:59:02.033433   19551 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 20:59:02.553298   19551 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 20:59:02.585907   19551 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 20:59:02.586264   19551 kubeadm.go:309] [mark-control-plane] Marking the node addons-476078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 20:59:02.600051   19551 kubeadm.go:309] [bootstrap-token] Using token: m2k46n.atcee0it0y39276n
	I0505 20:59:02.601455   19551 out.go:204]   - Configuring RBAC rules ...
	I0505 20:59:02.601568   19551 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 20:59:02.609367   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 20:59:02.620096   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 20:59:02.624949   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 20:59:02.627835   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 20:59:02.632967   19551 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 20:59:02.745274   19551 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 20:59:03.190426   19551 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 20:59:03.744916   19551 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 20:59:03.747008   19551 kubeadm.go:309] 
	I0505 20:59:03.747080   19551 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 20:59:03.747101   19551 kubeadm.go:309] 
	I0505 20:59:03.747177   19551 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 20:59:03.747189   19551 kubeadm.go:309] 
	I0505 20:59:03.747222   19551 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 20:59:03.747268   19551 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 20:59:03.747316   19551 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 20:59:03.747323   19551 kubeadm.go:309] 
	I0505 20:59:03.747363   19551 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 20:59:03.747369   19551 kubeadm.go:309] 
	I0505 20:59:03.747404   19551 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 20:59:03.747410   19551 kubeadm.go:309] 
	I0505 20:59:03.747449   19551 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 20:59:03.747543   19551 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 20:59:03.747659   19551 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 20:59:03.747681   19551 kubeadm.go:309] 
	I0505 20:59:03.747783   19551 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 20:59:03.747877   19551 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 20:59:03.747889   19551 kubeadm.go:309] 
	I0505 20:59:03.748002   19551 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token m2k46n.atcee0it0y39276n \
	I0505 20:59:03.748161   19551 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 \
	I0505 20:59:03.748199   19551 kubeadm.go:309] 	--control-plane 
	I0505 20:59:03.748216   19551 kubeadm.go:309] 
	I0505 20:59:03.748325   19551 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 20:59:03.748332   19551 kubeadm.go:309] 
	I0505 20:59:03.748408   19551 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token m2k46n.atcee0it0y39276n \
	I0505 20:59:03.748540   19551 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 
	I0505 20:59:03.748689   19551 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
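The only preflight warning concerns the kubelet unit not being enabled at boot. Whether that matters for a given run can be checked, and fixed, over the same ssh path the harness uses elsewhere in this report:

out/minikube-linux-amd64 -p addons-476078 ssh "sudo systemctl is-enabled kubelet"
out/minikube-linux-amd64 -p addons-476078 ssh "sudo systemctl enable kubelet.service"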
	I0505 20:59:03.748703   19551 cni.go:84] Creating CNI manager for ""
	I0505 20:59:03.748713   19551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:59:03.750679   19551 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 20:59:03.752142   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 20:59:03.767520   19551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
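The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. A typical bridge conflist of about that size looks like the following; the subnet value and exact fields are assumptions, and the sketch writes to a throwaway path rather than the live CNI directory:

cat <<'EOF' > /tmp/1-k8s.conflist.example   # illustrative only, not the file minikube wrote
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF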
	I0505 20:59:03.788783   19551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 20:59:03.788852   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:03.788852   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-476078 minikube.k8s.io/updated_at=2024_05_05T20_59_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=addons-476078 minikube.k8s.io/primary=true
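The two commands above run in parallel: one grants the kube-system:default service account cluster-admin through the minikube-rbac binding, the other stamps the node with minikube's bookkeeping labels. Both results can be inspected afterwards with the same kubectl binary and kubeconfig that appear in the log:

sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o yaml
sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-476078 --show-labels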
	I0505 20:59:03.846495   19551 ops.go:34] apiserver oom_adj: -16
	I0505 20:59:03.959981   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:04.460385   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:04.960940   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:05.460366   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:05.960786   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:06.460471   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:06.960026   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:07.460246   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:07.959980   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:08.460209   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:08.960611   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:09.460194   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:09.960695   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:10.460651   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:10.960288   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:11.460430   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:11.960700   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:12.460477   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:12.960683   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:13.460332   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:13.961042   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:14.460168   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:14.960455   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:15.460071   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:15.960299   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:16.460865   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:16.552841   19551 kubeadm.go:1107] duration metric: took 12.764049588s to wait for elevateKubeSystemPrivileges
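The run of near-identical "get sa default" commands above is a poll: judging by the timestamps, minikube retries about every half second until the default service account exists, which took the 12.76s recorded in the duration metric. A shell equivalent of that wait, using the same binary and kubeconfig, would be:

# poll until the default service account appears
until sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
  sleep 0.5
done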
	W0505 20:59:16.552895   19551 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 20:59:16.552908   19551 kubeadm.go:393] duration metric: took 23.288442045s to StartCluster
	I0505 20:59:16.552938   19551 settings.go:142] acquiring lock: {Name:mkbe19b7965e4b0b9928cd2b7b56f51dec95b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:59:16.553096   19551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 20:59:16.553641   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:59:16.553865   19551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0505 20:59:16.553891   19551 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 20:59:16.555818   19551 out.go:177] * Verifying Kubernetes components...
	I0505 20:59:16.553969   19551 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
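This map is the addon set requested for the profile; every entry marked true is enabled concurrently in the lines that follow. The same state can be inspected or adjusted from the CLI after the fact, for example:

out/minikube-linux-amd64 -p addons-476078 addons list
out/minikube-linux-amd64 -p addons-476078 addons enable metrics-server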
	I0505 20:59:16.554089   19551 config.go:182] Loaded profile config "addons-476078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 20:59:16.557304   19551 addons.go:69] Setting yakd=true in profile "addons-476078"
	I0505 20:59:16.557309   19551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 20:59:16.557321   19551 addons.go:69] Setting default-storageclass=true in profile "addons-476078"
	I0505 20:59:16.557325   19551 addons.go:69] Setting cloud-spanner=true in profile "addons-476078"
	I0505 20:59:16.557303   19551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-476078"
	I0505 20:59:16.557347   19551 addons.go:69] Setting ingress=true in profile "addons-476078"
	I0505 20:59:16.557361   19551 addons.go:69] Setting ingress-dns=true in profile "addons-476078"
	I0505 20:59:16.557372   19551 addons.go:234] Setting addon cloud-spanner=true in "addons-476078"
	I0505 20:59:16.557380   19551 addons.go:234] Setting addon ingress-dns=true in "addons-476078"
	I0505 20:59:16.557389   19551 addons.go:69] Setting helm-tiller=true in profile "addons-476078"
	I0505 20:59:16.557390   19551 addons.go:69] Setting inspektor-gadget=true in profile "addons-476078"
	I0505 20:59:16.557379   19551 addons.go:69] Setting gcp-auth=true in profile "addons-476078"
	I0505 20:59:16.557396   19551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-476078"
	I0505 20:59:16.557406   19551 addons.go:234] Setting addon helm-tiller=true in "addons-476078"
	I0505 20:59:16.557416   19551 addons.go:234] Setting addon inspektor-gadget=true in "addons-476078"
	I0505 20:59:16.557429   19551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-476078"
	I0505 20:59:16.557431   19551 mustload.go:65] Loading cluster: addons-476078
	I0505 20:59:16.557334   19551 addons.go:234] Setting addon yakd=true in "addons-476078"
	I0505 20:59:16.557434   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557442   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557428   19551 addons.go:69] Setting storage-provisioner=true in profile "addons-476078"
	I0505 20:59:16.557431   19551 addons.go:69] Setting volcano=true in profile "addons-476078"
	I0505 20:59:16.557461   19551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-476078"
	I0505 20:59:16.557479   19551 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-476078"
	I0505 20:59:16.557480   19551 addons.go:234] Setting addon storage-provisioner=true in "addons-476078"
	I0505 20:59:16.557490   19551 addons.go:234] Setting addon volcano=true in "addons-476078"
	I0505 20:59:16.557495   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557355   19551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-476078"
	I0505 20:59:16.557523   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557556   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557430   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557709   19551 config.go:182] Loaded profile config "addons-476078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 20:59:16.557912   19551 addons.go:69] Setting registry=true in profile "addons-476078"
	I0505 20:59:16.557936   19551 addons.go:234] Setting addon registry=true in "addons-476078"
	I0505 20:59:16.557941   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557947   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557956   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557958   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557965   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557967   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557975   19551 addons.go:69] Setting volumesnapshots=true in profile "addons-476078"
	I0505 20:59:16.558007   19551 addons.go:234] Setting addon volumesnapshots=true in "addons-476078"
	I0505 20:59:16.557977   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557941   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557378   19551 addons.go:234] Setting addon ingress=true in "addons-476078"
	I0505 20:59:16.558037   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557451   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558085   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558219   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558231   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558239   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558253   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558285   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557935   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558350   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558352   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558379   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558411   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557961   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558421   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558425   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558435   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557315   19551 addons.go:69] Setting metrics-server=true in profile "addons-476078"
	I0505 20:59:16.558517   19551 addons.go:234] Setting addon metrics-server=true in "addons-476078"
	I0505 20:59:16.557391   19551 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-476078"
	I0505 20:59:16.558654   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558664   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558665   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558719   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558722   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558748   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558862   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558873   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.579601   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0505 20:59:16.579601   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40835
	I0505 20:59:16.579923   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0505 20:59:16.580053   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.580187   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.580306   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.580558   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.580587   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.580756   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.580777   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.580909   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.580923   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.581257   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.581312   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.581354   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.581594   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.581642   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.582066   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.582090   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.586371   19551 addons.go:234] Setting addon default-storageclass=true in "addons-476078"
	I0505 20:59:16.586403   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.586439   19551 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-476078"
	I0505 20:59:16.586471   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.586667   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.586697   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.586825   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.586870   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.590389   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0505 20:59:16.592024   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.592053   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.592706   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.592742   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.593277   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.593331   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.599529   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
	I0505 20:59:16.599536   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0505 20:59:16.599545   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.600143   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.600162   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.600461   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.600528   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.601112   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.601146   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.601382   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.601466   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.601477   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.601764   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.601922   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.601936   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.602378   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.602412   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.602597   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.603177   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.603207   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.614145   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0505 20:59:16.615073   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.615753   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.615772   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.615923   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0505 20:59:16.616131   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.616674   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.616712   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.617293   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.617960   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.617977   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.620560   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.621294   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.621335   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.621897   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0505 20:59:16.622371   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.622856   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.622872   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.623207   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.623767   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.623790   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.625716   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0505 20:59:16.626581   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.627122   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.627138   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.627461   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.628017   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.628051   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.628281   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39611
	I0505 20:59:16.629186   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.629746   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.629762   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.630093   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.630641   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.630673   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.633558   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0505 20:59:16.634046   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.634555   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.634572   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.634937   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.635122   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.636989   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.639388   19551 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0505 20:59:16.639211   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40145
	I0505 20:59:16.640832   19551 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0505 20:59:16.640847   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0505 20:59:16.640864   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.641275   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.641688   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.641705   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.642025   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.642605   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.642643   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.644376   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.644927   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.644959   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.645126   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.645298   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.645436   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.645559   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
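Every sshutil client in this log uses the same key, user, and address, so the node can also be reached directly when debugging outside the harness:

ssh -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa docker@192.168.39.102
# or, equivalently, through minikube itself
out/minikube-linux-amd64 -p addons-476078 ssh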
	I0505 20:59:16.646015   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I0505 20:59:16.646449   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.646937   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.646959   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.647868   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.648440   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.648474   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.649940   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35571
	I0505 20:59:16.650052   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0505 20:59:16.650441   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.650499   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.650949   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.650965   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.651101   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.651110   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.651503   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.652149   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.652199   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.652655   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.652827   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.654361   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.656392   19551 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0505 20:59:16.657811   19551 out.go:177]   - Using image docker.io/busybox:stable
	I0505 20:59:16.656271   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0505 20:59:16.656823   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0505 20:59:16.659680   19551 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0505 20:59:16.659701   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0505 20:59:16.659717   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.658234   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.658461   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.658942   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0505 20:59:16.659433   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33407
	I0505 20:59:16.660291   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.660315   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.660700   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.661523   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.661539   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.661596   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.661732   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.661741   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.662144   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.662314   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663005   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.663025   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.663034   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.663059   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.663261   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.663442   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663533   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.663575   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663641   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.663658   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.663685   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663734   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.663886   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.664000   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.664116   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.665014   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.667064   19551 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0505 20:59:16.665932   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.666598   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.666757   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0505 20:59:16.667219   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.668338   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0505 20:59:16.668349   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0505 20:59:16.668365   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.669529   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.670870   19551 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0505 20:59:16.669626   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0505 20:59:16.670093   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.670624   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35365
	I0505 20:59:16.671217   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0505 20:59:16.672706   19551 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0505 20:59:16.672841   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.673466   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.673943   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.673662   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.673891   19551 out.go:177]   - Using image docker.io/registry:2.8.3
	I0505 20:59:16.674117   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0505 20:59:16.674150   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.673918   19551 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0505 20:59:16.674609   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.675461   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.675473   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.675529   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.676900   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0505 20:59:16.676915   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0505 20:59:16.676933   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.675548   19551 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0505 20:59:16.674749   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.674765   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0505 20:59:16.674875   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.674987   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.675151   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.674638   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0505 20:59:16.675859   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.677240   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0505 20:59:16.678980   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.678995   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.679071   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.679143   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.679269   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.679279   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.679408   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.679577   19551 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0505 20:59:16.679591   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0505 20:59:16.679605   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.679845   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.679876   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.680445   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.680461   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.680780   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.680826   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.680986   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.681087   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.681175   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.681190   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.681701   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45199
	I0505 20:59:16.681810   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.681831   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.681846   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.681916   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.681957   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.682166   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.682194   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.682299   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.682667   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.682683   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I0505 20:59:16.682696   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.683061   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.683112   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.683130   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.683145   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.683369   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.683515   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.683535   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.683613   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.683656   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.683781   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.683797   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.683804   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.684218   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.684256   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.684299   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.684358   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.684615   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.684666   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.686421   19551 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0505 20:59:16.687814   19551 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0505 20:59:16.686455   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.685209   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.685326   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.685555   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.686251   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.685094   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.686891   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.688096   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.688137   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.688320   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0505 20:59:16.688342   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.688414   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.688447   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.688470   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.687061   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.687353   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.687648   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.688653   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.690091   19551 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 20:59:16.688903   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.689517   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.689589   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.690629   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.691236   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.691282   19551 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 20:59:16.691916   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.692292   19551 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0505 20:59:16.692505   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.693477   19551 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0505 20:59:16.692624   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.694830   19551 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0505 20:59:16.694845   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0505 20:59:16.694860   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.693582   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.696143   19551 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0505 20:59:16.696164   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0505 20:59:16.696179   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.693596   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 20:59:16.696235   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.693722   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.693755   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:16.696542   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:16.693777   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.694903   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.694999   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.696962   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.697029   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:16.697052   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:16.697060   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:16.697075   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:16.697230   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.697477   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:16.697502   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:16.697514   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	W0505 20:59:16.697636   19551 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0505 20:59:16.699207   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.700473   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.700883   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.700910   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.700944   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.700969   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.701087   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.701251   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.701437   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.701439   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.701586   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.701768   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.701961   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.702143   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.703086   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.703547   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.703572   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.703723   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.703861   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.703957   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.704048   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.712033   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0505 20:59:16.712607   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.713184   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.713204   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.713252   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0505 20:59:16.713597   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.713838   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.713868   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0505 20:59:16.713939   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.714368   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.714390   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.714462   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.714708   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.714903   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.714967   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.714989   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.715297   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.715466   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.716223   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.716637   19551 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 20:59:16.716652   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 20:59:16.716670   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.717235   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0505 20:59:16.717285   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.719372   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0505 20:59:16.717691   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.720089   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.720747   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.720776   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.721994   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0505 20:59:16.720654   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0505 20:59:16.720688   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.721334   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.723043   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.724150   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0505 20:59:16.725504   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0505 20:59:16.723220   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.723368   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.723432   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.727858   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0505 20:59:16.729127   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0505 20:59:16.726948   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.726971   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.727294   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.730151   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.731377   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0505 20:59:16.730326   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.730465   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.731742   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.733647   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0505 20:59:16.734677   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0505 20:59:16.732754   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	W0505 20:59:16.733229   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34158->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.736021   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0505 20:59:16.737226   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0505 20:59:16.737231   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0505 20:59:16.737249   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.736045   19551 retry.go:31] will retry after 287.519499ms: ssh: handshake failed: read tcp 192.168.39.1:34158->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.737279   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0505 20:59:16.737440   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.737440   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.739340   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0505 20:59:16.740099   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.740590   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.741275   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.741281   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.741299   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.740809   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.741250   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 20:59:16.741337   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.742613   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 20:59:16.742644   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.741427   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.741464   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.743991   19551 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0505 20:59:16.744004   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0505 20:59:16.744017   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.744041   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.744115   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.744194   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.744255   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.746961   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.747321   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.747350   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.747475   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.747624   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.747749   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.747856   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	W0505 20:59:16.748513   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34166->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.748539   19551 retry.go:31] will retry after 353.400197ms: ssh: handshake failed: read tcp 192.168.39.1:34166->192.168.39.102:22: read: connection reset by peer
	W0505 20:59:16.748660   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34168->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.748676   19551 retry.go:31] will retry after 245.1848ms: ssh: handshake failed: read tcp 192.168.39.1:34168->192.168.39.102:22: read: connection reset by peer
	W0505 20:59:16.773951   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34178->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.773976   19551 retry.go:31] will retry after 240.283066ms: ssh: handshake failed: read tcp 192.168.39.1:34178->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.927718   19551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 20:59:16.927731   19551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0505 20:59:16.967596   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0505 20:59:17.011049   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0505 20:59:17.011083   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0505 20:59:17.030853   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0505 20:59:17.075422   19551 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0505 20:59:17.075450   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0505 20:59:17.111745   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0505 20:59:17.116694   19551 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0505 20:59:17.116718   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0505 20:59:17.154648   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0505 20:59:17.154672   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0505 20:59:17.158704   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 20:59:17.161399   19551 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0505 20:59:17.161419   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0505 20:59:17.180257   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0505 20:59:17.180281   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0505 20:59:17.222716   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0505 20:59:17.250353   19551 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0505 20:59:17.250383   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0505 20:59:17.339397   19551 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0505 20:59:17.339423   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0505 20:59:17.350090   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 20:59:17.350110   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0505 20:59:17.384464   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0505 20:59:17.384484   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0505 20:59:17.397303   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 20:59:17.404249   19551 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0505 20:59:17.404273   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0505 20:59:17.444884   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0505 20:59:17.468462   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0505 20:59:17.468488   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0505 20:59:17.555354   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0505 20:59:17.556518   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0505 20:59:17.587828   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 20:59:17.588019   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0505 20:59:17.588042   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0505 20:59:17.668921   19551 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0505 20:59:17.668956   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0505 20:59:17.698341   19551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0505 20:59:17.698372   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0505 20:59:17.780921   19551 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0505 20:59:17.780946   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0505 20:59:17.787900   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0505 20:59:17.787917   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0505 20:59:17.876959   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0505 20:59:17.876993   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0505 20:59:17.940401   19551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0505 20:59:17.940424   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0505 20:59:18.064516   19551 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0505 20:59:18.064540   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0505 20:59:18.128688   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0505 20:59:18.128720   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0505 20:59:18.140619   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0505 20:59:18.283601   19551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0505 20:59:18.283633   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0505 20:59:18.520153   19551 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0505 20:59:18.520177   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0505 20:59:18.524013   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0505 20:59:18.524034   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0505 20:59:18.628736   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0505 20:59:18.628760   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0505 20:59:18.800885   19551 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0505 20:59:18.800908   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0505 20:59:18.972709   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0505 20:59:19.021984   19551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.094176572s)
	I0505 20:59:19.022013   19551 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0505 20:59:19.022022   19551 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.094269417s)
	I0505 20:59:19.022747   19551 node_ready.go:35] waiting up to 6m0s for node "addons-476078" to be "Ready" ...
	I0505 20:59:19.054466   19551 node_ready.go:49] node "addons-476078" has status "Ready":"True"
	I0505 20:59:19.054489   19551 node_ready.go:38] duration metric: took 31.696523ms for node "addons-476078" to be "Ready" ...
	I0505 20:59:19.054498   19551 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 20:59:19.072847   19551 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:19.152202   19551 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 20:59:19.152230   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0505 20:59:19.356460   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0505 20:59:19.356495   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0505 20:59:19.466227   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 20:59:19.528257   19551 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-476078" context rescaled to 1 replicas
	I0505 20:59:19.630907   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0505 20:59:19.630927   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0505 20:59:19.846601   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0505 20:59:19.846629   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0505 20:59:20.226165   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0505 20:59:20.226186   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0505 20:59:20.676164   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0505 20:59:20.676188   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0505 20:59:20.845068   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0505 20:59:20.845091   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0505 20:59:21.080247   19551 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:21.284462   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0505 20:59:23.405935   19551 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:23.471678   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.50403672s)
	I0505 20:59:23.471698   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.440810283s)
	I0505 20:59:23.471729   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.471744   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.471768   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.359989931s)
	I0505 20:59:23.471802   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.471817   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.471729   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.471880   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472049   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472069   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472073   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472105   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.472113   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472122   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472154   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472175   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472179   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472185   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.472192   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472258   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472325   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472346   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.472353   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472376   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472365   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472324   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472453   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472454   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472464   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472671   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472721   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472731   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694472   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.535737185s)
	I0505 20:59:23.694517   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694528   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694549   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.471798915s)
	I0505 20:59:23.694586   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694598   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694603   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.297277226s)
	I0505 20:59:23.694619   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694632   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694646   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.249736475s)
	I0505 20:59:23.694673   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694682   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694786   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.694828   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.694835   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694844   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694847   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.694858   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694866   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694873   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694898   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.694851   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694911   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694922   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694930   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.695017   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695027   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695050   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695053   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695060   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695068   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.695075   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.695089   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695111   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695310   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695337   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695344   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695495   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695528   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695535   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695543   19551 addons.go:475] Verifying addon registry=true in "addons-476078"
	I0505 20:59:23.697982   19551 out.go:177] * Verifying registry addon...
	I0505 20:59:23.695704   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695722   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.699578   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.700466   19551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0505 20:59:23.872551   19551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0505 20:59:23.872582   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:23.892506   19551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0505 20:59:23.892537   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:23.895893   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:23.896311   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:23.896341   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:23.896533   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:23.896745   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:23.896914   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:23.897052   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:23.933692   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.933717   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.934018   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.934061   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.934072   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.957239   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.957258   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.957562   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.957599   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.957608   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	W0505 20:59:23.957714   19551 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0505 20:59:24.294391   19551 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.294417   19551 pod_ready.go:81] duration metric: took 5.22153717s for pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.294427   19551 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gpclx" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.348859   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:24.356512   19551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0505 20:59:24.420462   19551 pod_ready.go:92] pod "coredns-7db6d8ff4d-gpclx" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.420485   19551 pod_ready.go:81] duration metric: took 126.050935ms for pod "coredns-7db6d8ff4d-gpclx" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.420494   19551 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.514662   19551 addons.go:234] Setting addon gcp-auth=true in "addons-476078"
	I0505 20:59:24.514741   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:24.515090   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:24.515123   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:24.529975   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0505 20:59:24.530404   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:24.530927   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:24.530957   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:24.531314   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:24.534705   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:24.534761   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:24.549833   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0505 20:59:24.550318   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:24.550839   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:24.550871   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:24.551206   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:24.551392   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:24.552959   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:24.553205   19551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0505 20:59:24.553225   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:24.555604   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:24.555996   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:24.556033   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:24.556349   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:24.556504   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:24.556637   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:24.556793   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:24.569280   19551 pod_ready.go:92] pod "etcd-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.569304   19551 pod_ready.go:81] duration metric: took 148.803044ms for pod "etcd-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.569316   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.699445   19551 pod_ready.go:92] pod "kube-apiserver-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.699474   19551 pod_ready.go:81] duration metric: took 130.149403ms for pod "kube-apiserver-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.699500   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.783743   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:24.788587   19551 pod_ready.go:92] pod "kube-controller-manager-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.788619   19551 pod_ready.go:81] duration metric: took 89.108732ms for pod "kube-controller-manager-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.788633   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrfs4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.842698   19551 pod_ready.go:92] pod "kube-proxy-qrfs4" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.842729   19551 pod_ready.go:81] duration metric: took 54.083291ms for pod "kube-proxy-qrfs4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.842742   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.891986   19551 pod_ready.go:92] pod "kube-scheduler-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.892013   19551 pod_ready.go:81] duration metric: took 49.262475ms for pod "kube-scheduler-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.892026   19551 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:25.207064   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:25.378647   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.823252066s)
	I0505 20:59:25.378705   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378718   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.378712   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.822155801s)
	I0505 20:59:25.378752   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378771   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.378789   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.790927629s)
	I0505 20:59:25.378826   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378841   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.378909   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.406153479s)
	I0505 20:59:25.378944   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378964   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.379116   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.238179313s)
	I0505 20:59:25.379148   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.379161   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381008   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381008   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381029   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381049   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381052   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381061   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381066   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381072   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381033   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381082   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381085   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381090   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381094   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381117   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381123   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381078   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381132   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381138   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381062   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381072   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381159   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381164   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381139   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381168   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381146   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381312   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381325   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381334   19551 addons.go:475] Verifying addon metrics-server=true in "addons-476078"
	I0505 20:59:25.381401   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381421   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.383255   19551 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-476078 service yakd-dashboard -n yakd-dashboard
	
	I0505 20:59:25.381515   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381539   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381554   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381572   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381597   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381609   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.386110   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.386128   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.386130   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.386151   19551 addons.go:475] Verifying addon ingress=true in "addons-476078"
	I0505 20:59:25.387804   19551 out.go:177] * Verifying ingress addon...
	I0505 20:59:25.389918   19551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0505 20:59:25.403342   19551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0505 20:59:25.403363   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
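The kapi.go lines above are minikube's internal poll loop: it lists pods in the ingress-nginx namespace by the label selector app.kubernetes.io/name=ingress-nginx and keeps waiting while they report Pending. A minimal manual equivalent for reproducing that check, assuming the context and namespace shown in this log (the 300s timeout is illustrative, not minikube's own value):

		kubectl --context addons-476078 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
		kubectl --context addons-476078 -n ingress-nginx wait --for=condition=Ready pod \
		  -l app.kubernetes.io/name=ingress-nginx --timeout=300s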
	I0505 20:59:25.705990   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:25.895707   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:26.221012   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:26.427386   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:26.584376   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.118099812s)
	W0505 20:59:26.584426   19551 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0505 20:59:26.584450   19551 retry.go:31] will retry after 276.362996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
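The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define it, and the API server had not yet established the new snapshot.storage.k8s.io/v1 types, hence "no matches for kind" and the hint to install CRDs first. minikube handles this by retrying (and, as seen below, re-applying with --force). A hedged sketch of sequencing the same files by hand, waiting for the CRD to become Established before applying the class (the 60s timeout is illustrative):

		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml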
	I0505 20:59:26.714819   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:26.861259   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 20:59:26.894189   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:26.904475   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:27.207175   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:27.394802   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:27.793564   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:27.934271   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:27.961802   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.677278095s)
	I0505 20:59:27.961828   19551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.408603507s)
	I0505 20:59:27.961847   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:27.961862   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:27.963622   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 20:59:27.962197   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:27.962232   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:27.963656   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:27.965113   19551 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0505 20:59:27.966299   19551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0505 20:59:27.966313   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0505 20:59:27.965125   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:27.966369   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:27.966642   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:27.966659   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:27.966670   19551 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-476078"
	I0505 20:59:27.966677   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:27.968121   19551 out.go:177] * Verifying csi-hostpath-driver addon...
	I0505 20:59:27.970165   19551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0505 20:59:28.001517   19551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0505 20:59:28.001546   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:28.047135   19551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0505 20:59:28.047167   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0505 20:59:28.198485   19551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0505 20:59:28.198513   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0505 20:59:28.208121   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:28.395526   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:28.409804   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0505 20:59:28.477161   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:28.705798   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:28.894987   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:28.975299   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:29.205998   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:29.403686   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:29.407097   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:29.489818   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:29.710034   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:29.894606   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:29.978368   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:30.209921   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:30.395368   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:30.414264   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.552958721s)
	I0505 20:59:30.414319   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.414335   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.414621   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.414640   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.414656   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.414664   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.414901   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.414920   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.475809   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:30.733541   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:30.861434   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.451595064s)
	I0505 20:59:30.861494   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.861519   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.861796   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.861818   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.861829   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.861839   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.861840   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:30.862085   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.862107   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.863990   19551 addons.go:475] Verifying addon gcp-auth=true in "addons-476078"
	I0505 20:59:30.865638   19551 out.go:177] * Verifying gcp-auth addon...
	I0505 20:59:30.868069   19551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0505 20:59:30.876700   19551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0505 20:59:30.876716   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:30.908554   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:30.977272   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:31.206654   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:31.371877   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:31.394872   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:31.477809   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:31.705173   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:31.872125   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:31.894961   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:31.904090   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:31.976975   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:32.206727   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:32.372126   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:32.396639   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:32.477510   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:32.708364   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:32.872682   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:32.895803   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:32.977431   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:33.206346   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:33.372580   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:33.394472   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:33.475342   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:33.706020   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:33.872242   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:33.894687   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:33.975582   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:34.205664   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:34.371930   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:34.395595   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:34.406035   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:34.476480   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:34.705443   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:34.872484   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:34.896421   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:34.976701   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:35.221526   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:35.372275   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:35.394838   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:35.557529   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:35.706651   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:35.873475   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:35.894894   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:35.985040   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:36.206592   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:36.371549   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:36.394724   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:36.477224   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:36.706065   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:36.873004   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:36.895047   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:36.898501   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:36.975674   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:37.205889   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:37.372338   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:37.397914   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:37.477574   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:37.706349   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:37.872835   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:37.894916   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:37.976060   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:38.205298   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:38.372967   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:38.395380   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:38.475512   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:38.707604   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:38.872338   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:38.895162   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:38.976349   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:39.206659   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:39.371459   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:39.396695   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:39.397840   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:39.483165   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:39.706856   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:39.872445   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:39.896779   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:39.977426   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:40.207114   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:40.371924   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:40.395734   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:40.476456   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:40.706257   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:40.873688   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:40.896935   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:41.150406   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:41.206300   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:41.372440   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:41.394924   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:41.402852   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:41.476943   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:41.708377   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:41.872908   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:41.895882   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:41.976595   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:42.205494   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:42.371598   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:42.394896   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:42.482487   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:42.706047   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:42.872865   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:42.895839   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:42.977330   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:43.205850   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:43.372418   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:43.394766   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:43.476687   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:43.705129   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:43.872844   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:43.895719   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:43.898556   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:43.981931   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:44.434388   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:44.435029   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:44.435231   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:44.477370   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:44.704645   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:44.871525   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:44.896530   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:44.978050   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:45.208875   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:45.371820   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:45.394764   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:45.476590   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:45.800325   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:45.873113   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:45.896897   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:45.902486   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:45.976213   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:46.206719   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:46.371366   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:46.399295   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:46.477352   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:46.706305   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:47.164827   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:47.174479   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:47.175474   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:47.207205   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:47.372284   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:47.394634   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:47.476067   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:47.706548   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:47.872618   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:47.896086   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:47.978781   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:48.206851   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:48.372075   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:48.395203   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:48.398685   19551 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:48.398710   19551 pod_ready.go:81] duration metric: took 23.506675049s for pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:48.398721   19551 pod_ready.go:38] duration metric: took 29.344212848s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 20:59:48.398738   19551 api_server.go:52] waiting for apiserver process to appear ...
	I0505 20:59:48.398797   19551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 20:59:48.416807   19551 api_server.go:72] duration metric: took 31.862880322s to wait for apiserver process to appear ...
	I0505 20:59:48.416822   19551 api_server.go:88] waiting for apiserver healthz status ...
	I0505 20:59:48.416839   19551 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0505 20:59:48.421720   19551 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0505 20:59:48.422596   19551 api_server.go:141] control plane version: v1.30.0
	I0505 20:59:48.422620   19551 api_server.go:131] duration metric: took 5.791761ms to wait for apiserver health ...
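The healthz check above is a plain HTTPS GET against the control-plane endpoint recorded in this log. A minimal manual check from inside the node, assuming the default system:public-info-viewer binding still permits unauthenticated access to /healthz (-k skips certificate verification):

		out/minikube-linux-amd64 -p addons-476078 ssh "curl -sk https://192.168.39.102:8443/healthz"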
	I0505 20:59:48.422631   19551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 20:59:48.431221   19551 system_pods.go:59] 18 kube-system pods found
	I0505 20:59:48.431248   19551 system_pods.go:61] "coredns-7db6d8ff4d-gnhf4" [230b69b2-9942-4035-bba5-637a32176daa] Running
	I0505 20:59:48.431255   19551 system_pods.go:61] "csi-hostpath-attacher-0" [9d360d83-ab63-48d2-969c-ef12d5ad5b99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0505 20:59:48.431261   19551 system_pods.go:61] "csi-hostpath-resizer-0" [1b5b593c-dd13-4bd6-9692-a3b8ec11bcca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0505 20:59:48.431269   19551 system_pods.go:61] "csi-hostpathplugin-nxl2f" [b71c6ae3-e8a1-49ac-b346-4d7e1a3053b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0505 20:59:48.431274   19551 system_pods.go:61] "etcd-addons-476078" [7dcbb44a-bd07-4992-95c7-b1fd7be71ee4] Running
	I0505 20:59:48.431314   19551 system_pods.go:61] "kube-apiserver-addons-476078" [38eb3fa4-5e1a-444e-93f9-0ad0a88cb90f] Running
	I0505 20:59:48.431318   19551 system_pods.go:61] "kube-controller-manager-addons-476078" [fda15bec-4567-4ef6-b78a-ddfbb106d504] Running
	I0505 20:59:48.431322   19551 system_pods.go:61] "kube-ingress-dns-minikube" [92b9cc6b-903c-41c2-9101-cc4acb08ee22] Running
	I0505 20:59:48.431326   19551 system_pods.go:61] "kube-proxy-qrfs4" [b627b443-bc49-42d8-ae83-f6893f382003] Running
	I0505 20:59:48.431329   19551 system_pods.go:61] "kube-scheduler-addons-476078" [b0712527-df01-4ef7-a896-261278abedb9] Running
	I0505 20:59:48.431335   19551 system_pods.go:61] "metrics-server-c59844bb4-nsvl8" [8b3d4733-9d64-4587-9ed8-b33c78c6ccf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0505 20:59:48.431341   19551 system_pods.go:61] "nvidia-device-plugin-daemonset-4s79g" [b7211778-f5aa-4ebe-973a-ac4ee0054143] Running
	I0505 20:59:48.431347   19551 system_pods.go:61] "registry-l4nvm" [6d3660b5-72f0-4cb8-850d-66e3367f0b2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0505 20:59:48.431355   19551 system_pods.go:61] "registry-proxy-8z9cj" [2b07c767-5f91-4286-b104-2fd55988d9ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0505 20:59:48.431363   19551 system_pods.go:61] "snapshot-controller-745499f584-69vg6" [65bdd394-ec86-4879-b54e-cea00657265d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.431373   19551 system_pods.go:61] "snapshot-controller-745499f584-drspx" [5c863640-d719-499b-bfcb-0f89b84bcda9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.431378   19551 system_pods.go:61] "storage-provisioner" [fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3] Running
	I0505 20:59:48.431386   19551 system_pods.go:61] "tiller-deploy-6677d64bcd-2tngp" [9e6ccc20-fbbd-4495-a454-2e47945c33dc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0505 20:59:48.431392   19551 system_pods.go:74] duration metric: took 8.75363ms to wait for pod list to return data ...
	I0505 20:59:48.431401   19551 default_sa.go:34] waiting for default service account to be created ...
	I0505 20:59:48.433736   19551 default_sa.go:45] found service account: "default"
	I0505 20:59:48.433754   19551 default_sa.go:55] duration metric: took 2.34573ms for default service account to be created ...
	I0505 20:59:48.433761   19551 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 20:59:48.441698   19551 system_pods.go:86] 18 kube-system pods found
	I0505 20:59:48.441722   19551 system_pods.go:89] "coredns-7db6d8ff4d-gnhf4" [230b69b2-9942-4035-bba5-637a32176daa] Running
	I0505 20:59:48.441730   19551 system_pods.go:89] "csi-hostpath-attacher-0" [9d360d83-ab63-48d2-969c-ef12d5ad5b99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0505 20:59:48.441736   19551 system_pods.go:89] "csi-hostpath-resizer-0" [1b5b593c-dd13-4bd6-9692-a3b8ec11bcca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0505 20:59:48.441745   19551 system_pods.go:89] "csi-hostpathplugin-nxl2f" [b71c6ae3-e8a1-49ac-b346-4d7e1a3053b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0505 20:59:48.441750   19551 system_pods.go:89] "etcd-addons-476078" [7dcbb44a-bd07-4992-95c7-b1fd7be71ee4] Running
	I0505 20:59:48.441755   19551 system_pods.go:89] "kube-apiserver-addons-476078" [38eb3fa4-5e1a-444e-93f9-0ad0a88cb90f] Running
	I0505 20:59:48.441760   19551 system_pods.go:89] "kube-controller-manager-addons-476078" [fda15bec-4567-4ef6-b78a-ddfbb106d504] Running
	I0505 20:59:48.441764   19551 system_pods.go:89] "kube-ingress-dns-minikube" [92b9cc6b-903c-41c2-9101-cc4acb08ee22] Running
	I0505 20:59:48.441768   19551 system_pods.go:89] "kube-proxy-qrfs4" [b627b443-bc49-42d8-ae83-f6893f382003] Running
	I0505 20:59:48.441772   19551 system_pods.go:89] "kube-scheduler-addons-476078" [b0712527-df01-4ef7-a896-261278abedb9] Running
	I0505 20:59:48.441778   19551 system_pods.go:89] "metrics-server-c59844bb4-nsvl8" [8b3d4733-9d64-4587-9ed8-b33c78c6ccf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0505 20:59:48.441785   19551 system_pods.go:89] "nvidia-device-plugin-daemonset-4s79g" [b7211778-f5aa-4ebe-973a-ac4ee0054143] Running
	I0505 20:59:48.441792   19551 system_pods.go:89] "registry-l4nvm" [6d3660b5-72f0-4cb8-850d-66e3367f0b2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0505 20:59:48.441797   19551 system_pods.go:89] "registry-proxy-8z9cj" [2b07c767-5f91-4286-b104-2fd55988d9ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0505 20:59:48.441804   19551 system_pods.go:89] "snapshot-controller-745499f584-69vg6" [65bdd394-ec86-4879-b54e-cea00657265d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.441810   19551 system_pods.go:89] "snapshot-controller-745499f584-drspx" [5c863640-d719-499b-bfcb-0f89b84bcda9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.441817   19551 system_pods.go:89] "storage-provisioner" [fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3] Running
	I0505 20:59:48.441822   19551 system_pods.go:89] "tiller-deploy-6677d64bcd-2tngp" [9e6ccc20-fbbd-4495-a454-2e47945c33dc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0505 20:59:48.441828   19551 system_pods.go:126] duration metric: took 8.061296ms to wait for k8s-apps to be running ...
	I0505 20:59:48.441835   19551 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 20:59:48.441871   19551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 20:59:48.458387   19551 system_svc.go:56] duration metric: took 16.545926ms WaitForService to wait for kubelet
	I0505 20:59:48.458409   19551 kubeadm.go:576] duration metric: took 31.904483648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 20:59:48.458431   19551 node_conditions.go:102] verifying NodePressure condition ...
	I0505 20:59:48.460892   19551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 20:59:48.460919   19551 node_conditions.go:123] node cpu capacity is 2
	I0505 20:59:48.460933   19551 node_conditions.go:105] duration metric: took 2.497185ms to run NodePressure ...
	I0505 20:59:48.460946   19551 start.go:240] waiting for startup goroutines ...
	I0505 20:59:48.476016   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:48.706131   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:48.873832   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:48.895286   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:48.976339   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:49.208194   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:49.372655   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:49.395655   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:49.476842   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:49.705361   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:49.872493   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:49.894091   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:49.975994   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:50.206244   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:50.372756   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:50.395126   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:50.476631   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:50.705579   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:50.872222   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:50.897032   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:50.979617   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:51.206318   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:51.372477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:51.398801   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:51.476198   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:51.707013   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:51.872278   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:51.895116   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:51.975926   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:52.205766   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:52.371794   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:52.394529   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:52.476102   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:52.705255   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:52.871943   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:52.895345   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:52.976688   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:53.205684   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:53.371976   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:53.394593   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:53.476181   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:53.706475   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:53.872162   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:53.895944   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:53.975751   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:54.205589   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:54.371624   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:54.394635   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:54.476865   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:54.705803   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:54.872852   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:54.895547   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:54.977636   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:55.206048   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:55.372379   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:55.394543   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:55.477506   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:55.706341   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:55.872722   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:55.894672   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:55.976398   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:56.205888   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:56.372118   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:56.395062   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:56.476355   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:56.707024   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:57.361989   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:57.362619   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:57.365787   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:57.366981   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:57.377252   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:57.394850   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:57.477261   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:57.709931   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:57.872655   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:57.895665   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:57.976241   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:58.205684   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:58.372654   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:58.395112   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:58.475915   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:58.706213   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:58.872413   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:58.894833   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:58.976343   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:59.205664   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:59.371904   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:59.395372   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:59.475754   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:59.704872   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:59.872317   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:59.893916   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:59.976477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:00.212672   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:00.372908   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:00.395083   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:00.475348   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:00.704776   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:00.872105   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:00.894744   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:00.983181   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:01.205714   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:01.372300   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:01.394560   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:01.476855   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:01.716326   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:01.872728   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:01.895359   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:01.976938   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:02.206224   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:02.373961   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:02.395304   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:02.476664   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:02.705588   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:02.872005   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:02.895510   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:02.976832   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:03.205277   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:03.372411   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:03.394832   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:03.480477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:03.704700   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:03.873874   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:03.896766   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:03.977848   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:04.206441   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:04.372895   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:04.395022   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:04.476534   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:04.712958   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:04.873422   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:04.894489   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:04.976815   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:05.206766   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:05.371963   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:05.395499   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:05.476276   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:05.705351   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:05.872611   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:05.895156   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:05.976504   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:06.206256   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:06.373021   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:06.395752   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:06.478042   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:06.706203   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:06.872787   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:06.897026   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:06.976483   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:07.206256   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:07.372771   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:07.395205   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:07.476036   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:07.706536   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:08.200972   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:08.201015   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:08.204336   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:08.209546   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:08.372031   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:08.395014   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:08.476966   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:08.710237   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:08.872878   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:08.895623   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:08.977240   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:09.205557   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:09.374122   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:09.395257   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:09.476731   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:09.706060   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:09.872531   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:09.894328   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:09.976576   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:10.207796   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:10.572230   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:10.573007   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:10.574855   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:10.706334   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:10.872812   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:10.895266   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:10.976459   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:11.206326   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:11.375881   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:11.398492   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:11.477239   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:11.705566   19551 kapi.go:107] duration metric: took 48.005096188s to wait for kubernetes.io/minikube-addons=registry ...
	I0505 21:00:11.872559   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:11.894919   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:11.977606   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:12.372444   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:12.394933   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:12.477502   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:12.872822   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:12.896877   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:12.981425   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:13.374173   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:13.397418   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:13.476421   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:13.873375   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:13.897002   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:13.976047   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:14.372515   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:14.394716   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:14.476985   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:14.872551   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:14.894714   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:14.976318   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:15.372854   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:15.395905   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:15.485348   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:15.873772   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:15.897978   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:15.979356   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:16.374915   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:16.395725   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:16.477421   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:16.872591   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:16.895678   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:16.977815   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:17.372054   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:17.395589   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:17.481134   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:17.872323   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:17.895166   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:17.976395   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:18.395630   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:18.398508   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:18.480130   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:18.875875   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:18.895555   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:18.976527   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:19.371738   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:19.394556   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:19.476300   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:19.875356   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:19.901203   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:19.979245   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:20.371966   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:20.395521   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:20.476642   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:20.871975   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:20.894997   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:20.976243   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:21.372896   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:21.395443   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:21.476719   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:21.872938   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:21.895282   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:21.977365   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:22.372343   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:22.394680   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:22.477925   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:22.875693   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:22.896402   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:22.977296   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:23.375360   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:23.397828   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:23.477271   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:23.876793   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:23.898179   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:23.976657   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:24.372011   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:24.395093   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:24.476209   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:24.872276   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:24.894398   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:24.976751   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:25.372519   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:25.394762   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:25.477645   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:25.876339   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:25.893958   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:25.977283   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:26.373062   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:26.395403   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:26.477176   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:26.873152   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:26.895733   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:26.977392   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:27.372831   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:27.395331   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:27.485653   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:27.872294   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:27.895868   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:27.978707   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:28.374542   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:28.396389   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:28.477994   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:28.871788   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:28.895129   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:28.976543   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:29.372481   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:29.394874   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:29.475490   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:29.873230   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:29.895983   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:29.979901   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:30.371683   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:30.394545   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:30.613509   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:30.875422   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:30.894466   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:30.976768   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:31.371757   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:31.396805   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:31.478047   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:32.075306   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:32.076954   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:32.077787   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:32.372371   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:32.394568   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:32.480667   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:32.872016   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:32.894953   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:32.975078   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:33.371971   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:33.395180   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:33.476229   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:33.872758   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:33.962673   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:33.979301   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:34.372217   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:34.395457   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:34.483069   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:34.873519   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:34.895999   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:34.976533   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:35.372622   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:35.395351   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:35.477243   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:35.872585   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:35.895360   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:35.976426   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:36.373400   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:36.397022   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:36.475568   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:36.872138   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:36.895872   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:36.976217   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:37.372507   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:37.394526   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:37.476468   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:37.872750   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:37.895792   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:37.976631   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:38.371893   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:38.395209   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:38.477223   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:38.872638   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:38.895212   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:38.976295   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:39.372777   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:39.395621   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:39.476809   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:39.872967   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:39.895731   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:39.976847   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:40.374462   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:40.393712   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:40.480600   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:40.879626   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:40.906032   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:40.979764   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:41.376816   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:41.417998   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:41.477041   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:41.872767   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:41.895358   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:41.976463   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:42.594827   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:42.595433   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:42.598951   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:42.872481   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:42.895256   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:42.979282   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:43.372155   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:43.397489   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:43.476696   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:43.872922   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:43.895327   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:43.983066   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:44.393785   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:44.400949   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:44.490075   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:44.878780   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:44.895440   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:44.975595   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:45.371842   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:45.394909   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:45.476483   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:45.879241   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:45.895729   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:45.976943   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:46.373235   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:46.395742   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:46.476480   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:47.145172   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:47.146189   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:47.146750   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:47.372876   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:47.395445   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:47.476052   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:47.872404   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:47.895793   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:47.977179   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:48.372652   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:48.396002   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:48.476270   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:48.872136   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:48.895798   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:48.986167   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:49.375068   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:49.397732   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:49.475569   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:49.873407   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:49.895846   19551 kapi.go:107] duration metric: took 1m24.505927196s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0505 21:00:49.977340   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:50.374177   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:50.484241   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:50.873407   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:50.976219   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:51.620060   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:51.621805   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:51.873271   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:51.979719   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:52.372887   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:52.476720   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:52.871606   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:52.979739   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:53.372517   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:53.479280   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:53.875703   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:53.977419   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:54.373690   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:54.478148   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:54.874156   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:54.986500   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:55.374542   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:55.477032   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:55.871852   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:55.977545   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:56.373714   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:56.476604   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:56.873197   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:56.977061   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:57.374304   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:57.477398   19551 kapi.go:107] duration metric: took 1m29.507231519s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
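	The repeated kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending, with kapi.go:107 reporting the total wait once the pods are ready (48s for registry, 1m24s for ingress-nginx, 1m29s for csi-hostpath-driver). The sketch below shows one way such a label-selector wait loop can be written with client-go; it is not minikube's actual kapi.WaitForPods implementation, and the package/function names, the 500ms poll interval, and the use of wait.PollUntilContextTimeout (available in recent k8s.io/apimachinery releases) are assumptions for illustration, assuming a configured kubernetes.Interface client.

	// Illustrative sketch only; not minikube's kapi.go code.
	package kapiexample

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in namespace ns until every
	// matching pod reports phase Running (or the timeout expires), then prints a
	// duration metric similar to the kapi.go:107 lines in the log above.
	func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet" and keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // nothing matching the selector scheduled yet
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // at least one pod is still Pending (as in the log lines above)
					}
				}
				return true, nil
			})
		if err != nil {
			return fmt.Errorf("timed out waiting for %q: %w", selector, err)
		}
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
		return nil
	}

	A caller would invoke it per addon selector, e.g. waitForPodsRunning(ctx, client, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); the gcp-auth wait that continues below is the same pattern that has not yet seen its pod leave Pending.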
	I0505 21:00:57.871936   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:58.373530   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:58.873409   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:59.373070   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:59.872263   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:00.373296   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:00.872446   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:01.374789   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:01.872857   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:02.372371   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:02.874583   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:03.373206   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:03.872123   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:04.372865   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:04.874268   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:05.372491   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:05.875068   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:06.372353   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:06.873503   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:07.372210   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:07.873522   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:08.372609   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:08.874975   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:09.373074   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:09.956295   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:10.372457   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:10.874097   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:11.372220   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:11.874624   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:12.372881   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:12.872477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:13.372612   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:13.875055   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:14.372867   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:14.872576   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:15.372833   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:15.872862   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:16.374146   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:16.872248   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:17.372633   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:17.873003   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:18.372677   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:18.873528   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:19.374925   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:19.872768   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:20.373580   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:20.873149   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:21.372685   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:21.873024   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:22.372673   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:22.872636   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:23.374715   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:23.872798   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:24.373137   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:24.872673   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:25.372785   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:25.873414   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:26.372395   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:26.872330   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:27.372458   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:27.872507   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:28.373498   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:28.873219   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:29.373505   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:29.872753   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:30.374902   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:30.872930   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:31.371925   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:31.873501   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:32.372667   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:32.873040   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:33.372501   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:33.873776   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:34.372785   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:34.872109   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:35.372382   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:35.872186   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:36.372648   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:36.873061   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:37.372198   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:37.876703   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:38.373285   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:38.872726   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:39.372707   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:39.872170   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:40.373465   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:40.872511   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:41.373335   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:41.872628   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:42.373539   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:42.872999   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:43.371603   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:43.872566   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:44.373086   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:44.872179   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:45.373444   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:45.872549   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:46.372625   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:46.872527   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:47.373455   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:47.871880   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:48.371699   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:48.873343   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:49.372440   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:49.872823   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:50.371894   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:50.872910   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:51.371840   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:51.872870   19551 kapi.go:107] duration metric: took 2m21.00479971s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0505 21:01:51.874704   19551 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-476078 cluster.
	I0505 21:01:51.876328   19551 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0505 21:01:51.877606   19551 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0505 21:01:51.879152   19551 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0505 21:01:51.880563   19551 addons.go:510] duration metric: took 2m35.326593725s for enable addons: enabled=[ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0505 21:01:51.880614   19551 start.go:245] waiting for cluster config update ...
	I0505 21:01:51.880632   19551 start.go:254] writing updated cluster config ...
	I0505 21:01:51.880920   19551 ssh_runner.go:195] Run: rm -f paused
	I0505 21:01:51.935210   19551 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0505 21:01:51.937186   19551 out.go:177] * Done! kubectl is now configured to use "addons-476078" cluster and "default" namespace by default
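	The three gcp-auth hints above amount to a short how-to: pods get GCP credentials mounted automatically, a pod can opt out by carrying the gcp-auth-skip-secret label, and pods that predate the addon need a recreate or a rerun of addons enable with --refresh. A minimal sketch against the same addons-476078 cluster (the pod name no-gcp-demo and the busybox image are illustrative, not taken from this run):

	    # Opt a new pod out of credential injection via the label named in the log.
	    kubectl --context addons-476078 run no-gcp-demo --image=busybox \
	      --labels=gcp-auth-skip-secret=true -- sleep 3600

	    # Remount credentials into pods that existed before gcp-auth finished starting.
	    minikube -p addons-476078 addons enable gcp-auth --refresh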
	
	
	==> CRI-O <==
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.554738190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf2a0b6a-efa6-4806-a4dd-155831fec629 name=/runtime.v1.RuntimeService/Version
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.556351439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d43e355e-5c4c-4164-af98-14777552ab8b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.557764583Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714943105557734895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d43e355e-5c4c-4164-af98-14777552ab8b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.558593106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=767bbbd5-4093-475f-8703-792eb425b215 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.558716957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=767bbbd5-4093-475f-8703-792eb425b215 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.559006559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b99ea752b7cad46711807229f07d8f43a6fb4ef08b22d378e07d5eda579a58c3,PodSandboxId:088fffe955b6fb98bbdfa224bc4b0057178baa3a3c9a9adc41602198d7b761e2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714943098747125803,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-28xbq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fca35d04-feb9-4aa8-b28e-582ccdde30b3,},Annotations:map[string]string{io.kubernetes.container.hash: e1ec41f,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4875e65576390576ddc7bb10fe9f4a135c15f48c22e01f2e26ec76fcea8e3f2d,PodSandboxId:930548ca74204279807e110df6241d75f3a71928df046856904881f918d49a15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714942958350024556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd318b3-f460-41a7-8b57-def112b59f42,},Annotations:map[string]string{io.kuberne
tes.container.hash: 71ff289f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d73a3c5fb17d80f5ac83f20ab31b627b7313bb0271ca970f606bc20cc744a1,PodSandboxId:4ad089ecea889b49f9f3583a6616859766f27cad39ed476ceebbfba068396c7c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714942922198607959,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-9tvbl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0a342843-0e7b-4235-8a87-1ab68db8e982,},Annotations:map[string]string{io.kubernetes.container.hash: 339662e5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a,PodSandboxId:32345da98def5801f0a61a844ee21ae1988070fb778e54314d210352304c49b7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714942911152960461,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-j6g6c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 350b4f6a-6a3b-404f-813f-84fd686ecd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 85d162ce,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9de9d8ab45c00db0b8ef19b6f7edc9c34c1df029fe91f4f5e4ce2ea80d6c7f,PodSandboxId:ce6559bbc0c22fb1d31e049017719339f47aec95475e4a24596865bb2a6ca094,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171494
2832163888123,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-2nv87,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6020ab74-7313-45e6-8080-4e84b676efe6,},Annotations:map[string]string{io.kubernetes.container.hash: 80e8b8e4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df,PodSandboxId:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714942800071785995,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nsvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,},Annotations:map[string]string{io.kubernetes.container.hash: 924e7843,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4,PodSandboxId:c9860ed473d4858386b69e8d662426e54a7450884a6ab91e1a8705cb9b3a6e4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714942768900464464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,},Annotations:map[string]string{io.kubernetes.container.hash: f9eee02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948,PodSandboxId:b0dd0025b9663eadd825753c4fa81257b86a6115a6c63bb5159ade58fdff06e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714942761309610772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnhf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b69b2-9942-4035-bba5-637a32176daa,},Annotations:map[string]string{io.kubernetes.container.hash: 27a2bfe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993,PodSand
boxId:51d78c16bbcdc39b6c1e9f90e2a00e4b80d4b66d9268652840e6686b95d322df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714942759200801790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qrfs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b627b443-bc49-42d8-ae83-f6893f382003,},Annotations:map[string]string{io.kubernetes.container.hash: 5382cea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f,PodSandboxId:4de96b20bb9ef3046e406342d12
59f2165032c640bce1d4eeab12c65545372e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714942737426442242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa7b1eda3c5a600ae0b2a0ea78fb243,},Annotations:map[string]string{io.kubernetes.container.hash: ca6ce4b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9,PodSandboxId:5d962e468c6ce947137c3b6849400443ce13c4c8941e2771802d2f27f37c948e,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714942737312474166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3920ed2d88d8c0d183cbbde1ee79949,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777,PodSandboxId:01e9bfa6c65ce73b9c2b5172b5d3c0256982c5fad1e1f4e8d850a5f5d74154e6,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714942737302536455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f2eeee73d76512f8cf103629b0adf8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972,PodSandboxId:2645e8c72e081d1751645cb482df2b6f3508faf426f66c8714d26dc01f62aa09,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714942737338558826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682daeee39870862b84bb87f95a68c7,},Annotations:map[string]string{io.kubernetes.container.hash: a13feaa9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=767bbbd5-4093-475f-8703-792eb425b215 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.563767446Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3013b766-8947-48e6-a475-1bd4b011c069 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.564092724Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:088fffe955b6fb98bbdfa224bc4b0057178baa3a3c9a9adc41602198d7b761e2,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-28xbq,Uid:fca35d04-feb9-4aa8-b28e-582ccdde30b3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714943094943892745,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-28xbq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fca35d04-feb9-4aa8-b28e-582ccdde30b3,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T21:04:54.624866360Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:930548ca74204279807e110df6241d75f3a71928df046856904881f918d49a15,Metadata:&PodSandboxMetadata{Name:nginx,Uid:ddd318b3-f460-41a7-8b57-def112b59f42,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1714942954020238262,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd318b3-f460-41a7-8b57-def112b59f42,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T21:02:33.711095831Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ad089ecea889b49f9f3583a6616859766f27cad39ed476ceebbfba068396c7c,Metadata:&PodSandboxMetadata{Name:headlamp-7559bf459f-9tvbl,Uid:0a342843-0e7b-4235-8a87-1ab68db8e982,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942913528946516,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7559bf459f-9tvbl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 0a342843-0e7b-4235-8a87-1ab68db8e982,pod-template-hash: 7559bf459f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
05-05T21:01:53.218102466Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:32345da98def5801f0a61a844ee21ae1988070fb778e54314d210352304c49b7,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-j6g6c,Uid:350b4f6a-6a3b-404f-813f-84fd686ecd8b,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942907109242989,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-j6g6c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 350b4f6a-6a3b-404f-813f-84fd686ecd8b,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T20:59:30.785672717Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce6559bbc0c22fb1d31e049017719339f47aec95475e4a24596865bb2a6ca094,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-2nv87,Uid:6020ab74-7313-45e6-8080-4e84b676efe6,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1714942767083452256,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-2nv87,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6020ab74-7313-45e6-8080-4e84b676efe6,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T20:59:24.427463892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9860ed473d4858386b69e8d662426e54a7450884a6ab91e1a8705cb9b3a6e4d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942766780170162,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-05T20:59:23.798130516Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,Metadata:&PodSandboxMetadata{Name:metrics-s
erver-c59844bb4-nsvl8,Uid:8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942763985044609,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-nsvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T20:59:22.951196516Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51d78c16bbcdc39b6c1e9f90e2a00e4b80d4b66d9268652840e6686b95d322df,Metadata:&PodSandboxMetadata{Name:kube-proxy-qrfs4,Uid:b627b443-bc49-42d8-ae83-f6893f382003,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942758409441004,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qrfs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b627b443-
bc49-42d8-ae83-f6893f382003,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T20:59:16.298508290Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0dd0025b9663eadd825753c4fa81257b86a6115a6c63bb5159ade58fdff06e6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnhf4,Uid:230b69b2-9942-4035-bba5-637a32176daa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942758165578725,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnhf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b69b2-9942-4035-bba5-637a32176daa,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T20:59:16.357880866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4de96b20bb9ef3046e406342d1259f2165032c640bce1d4eeab12c65545372e0,Metadata:&PodSandboxMetadata{Name:etcd-addons-476078,Uid:0fa7b
1eda3c5a600ae0b2a0ea78fb243,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942737136059197,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa7b1eda3c5a600ae0b2a0ea78fb243,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.102:2379,kubernetes.io/config.hash: 0fa7b1eda3c5a600ae0b2a0ea78fb243,kubernetes.io/config.seen: 2024-05-05T20:58:56.642061484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2645e8c72e081d1751645cb482df2b6f3508faf426f66c8714d26dc01f62aa09,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-476078,Uid:a682daeee39870862b84bb87f95a68c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942737129941492,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-
476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682daeee39870862b84bb87f95a68c7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.102:8443,kubernetes.io/config.hash: a682daeee39870862b84bb87f95a68c7,kubernetes.io/config.seen: 2024-05-05T20:58:56.642065166Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5d962e468c6ce947137c3b6849400443ce13c4c8941e2771802d2f27f37c948e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-476078,Uid:e3920ed2d88d8c0d183cbbde1ee79949,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942737125400764,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3920ed2d88d8c0d183cbbde1ee79949,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e3920ed2d88d8c0d183cbbde1ee79949,
kubernetes.io/config.seen: 2024-05-05T20:58:56.642068199Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01e9bfa6c65ce73b9c2b5172b5d3c0256982c5fad1e1f4e8d850a5f5d74154e6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-476078,Uid:26f2eeee73d76512f8cf103629b0adf8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714942737111954870,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f2eeee73d76512f8cf103629b0adf8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 26f2eeee73d76512f8cf103629b0adf8,kubernetes.io/config.seen: 2024-05-05T20:58:56.642066381Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3013b766-8947-48e6-a475-1bd4b011c069 name=/runtime.v1.RuntimeService/ListPodSandbox
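	The Request/Response pairs in this excerpt are plain CRI calls (Version, ImageFsInfo, ListPodSandbox, ListContainers) that cri-o serves to whichever CRI client is asking, so the same views can be pulled by hand when debugging. A sketch, assuming crictl is present on the node (reachable via minikube ssh) and pointed at the default cri-o socket:

	    sudo crictl version       # RuntimeService/Version
	    sudo crictl imagefsinfo   # ImageService/ImageFsInfo
	    sudo crictl pods          # RuntimeService/ListPodSandbox
	    sudo crictl ps -a         # RuntimeService/ListContainers with no filter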
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.564934329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4a287f7-0cae-4ed5-89eb-822ef3130684 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.564992894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4a287f7-0cae-4ed5-89eb-822ef3130684 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.565300992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b99ea752b7cad46711807229f07d8f43a6fb4ef08b22d378e07d5eda579a58c3,PodSandboxId:088fffe955b6fb98bbdfa224bc4b0057178baa3a3c9a9adc41602198d7b761e2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714943098747125803,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-28xbq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fca35d04-feb9-4aa8-b28e-582ccdde30b3,},Annotations:map[string]string{io.kubernetes.container.hash: e1ec41f,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4875e65576390576ddc7bb10fe9f4a135c15f48c22e01f2e26ec76fcea8e3f2d,PodSandboxId:930548ca74204279807e110df6241d75f3a71928df046856904881f918d49a15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714942958350024556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd318b3-f460-41a7-8b57-def112b59f42,},Annotations:map[string]string{io.kuberne
tes.container.hash: 71ff289f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d73a3c5fb17d80f5ac83f20ab31b627b7313bb0271ca970f606bc20cc744a1,PodSandboxId:4ad089ecea889b49f9f3583a6616859766f27cad39ed476ceebbfba068396c7c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714942922198607959,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-9tvbl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0a342843-0e7b-4235-8a87-1ab68db8e982,},Annotations:map[string]string{io.kubernetes.container.hash: 339662e5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a,PodSandboxId:32345da98def5801f0a61a844ee21ae1988070fb778e54314d210352304c49b7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714942911152960461,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-j6g6c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 350b4f6a-6a3b-404f-813f-84fd686ecd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 85d162ce,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9de9d8ab45c00db0b8ef19b6f7edc9c34c1df029fe91f4f5e4ce2ea80d6c7f,PodSandboxId:ce6559bbc0c22fb1d31e049017719339f47aec95475e4a24596865bb2a6ca094,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171494
2832163888123,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-2nv87,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6020ab74-7313-45e6-8080-4e84b676efe6,},Annotations:map[string]string{io.kubernetes.container.hash: 80e8b8e4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df,PodSandboxId:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714942800071785995,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nsvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,},Annotations:map[string]string{io.kubernetes.container.hash: 924e7843,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4,PodSandboxId:c9860ed473d4858386b69e8d662426e54a7450884a6ab91e1a8705cb9b3a6e4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714942768900464464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,},Annotations:map[string]string{io.kubernetes.container.hash: f9eee02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948,PodSandboxId:b0dd0025b9663eadd825753c4fa81257b86a6115a6c63bb5159ade58fdff06e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714942761309610772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnhf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b69b2-9942-4035-bba5-637a32176daa,},Annotations:map[string]string{io.kubernetes.container.hash: 27a2bfe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993,PodSand
boxId:51d78c16bbcdc39b6c1e9f90e2a00e4b80d4b66d9268652840e6686b95d322df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714942759200801790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qrfs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b627b443-bc49-42d8-ae83-f6893f382003,},Annotations:map[string]string{io.kubernetes.container.hash: 5382cea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f,PodSandboxId:4de96b20bb9ef3046e406342d12
59f2165032c640bce1d4eeab12c65545372e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714942737426442242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa7b1eda3c5a600ae0b2a0ea78fb243,},Annotations:map[string]string{io.kubernetes.container.hash: ca6ce4b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9,PodSandboxId:5d962e468c6ce947137c3b6849400443ce13c4c8941e2771802d2f27f37c948e,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714942737312474166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3920ed2d88d8c0d183cbbde1ee79949,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777,PodSandboxId:01e9bfa6c65ce73b9c2b5172b5d3c0256982c5fad1e1f4e8d850a5f5d74154e6,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714942737302536455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f2eeee73d76512f8cf103629b0adf8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972,PodSandboxId:2645e8c72e081d1751645cb482df2b6f3508faf426f66c8714d26dc01f62aa09,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714942737338558826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682daeee39870862b84bb87f95a68c7,},Annotations:map[string]string{io.kubernetes.container.hash: a13feaa9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4a287f7-0cae-4ed5-89eb-822ef3130684 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.600248829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc1ba94e-4f79-4582-b598-55c363afe9a6 name=/runtime.v1.RuntimeService/Version
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.600577943Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc1ba94e-4f79-4582-b598-55c363afe9a6 name=/runtime.v1.RuntimeService/Version
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.602118685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c25dbed-d558-48ab-bd08-0be3e2290907 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.603288099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714943105603264032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c25dbed-d558-48ab-bd08-0be3e2290907 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.604072720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eec97b84-07d8-48cf-94e9-960e542a7214 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.604153698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eec97b84-07d8-48cf-94e9-960e542a7214 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.604687259Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b99ea752b7cad46711807229f07d8f43a6fb4ef08b22d378e07d5eda579a58c3,PodSandboxId:088fffe955b6fb98bbdfa224bc4b0057178baa3a3c9a9adc41602198d7b761e2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714943098747125803,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-28xbq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fca35d04-feb9-4aa8-b28e-582ccdde30b3,},Annotations:map[string]string{io.kubernetes.container.hash: e1ec41f,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4875e65576390576ddc7bb10fe9f4a135c15f48c22e01f2e26ec76fcea8e3f2d,PodSandboxId:930548ca74204279807e110df6241d75f3a71928df046856904881f918d49a15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714942958350024556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd318b3-f460-41a7-8b57-def112b59f42,},Annotations:map[string]string{io.kuberne
tes.container.hash: 71ff289f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d73a3c5fb17d80f5ac83f20ab31b627b7313bb0271ca970f606bc20cc744a1,PodSandboxId:4ad089ecea889b49f9f3583a6616859766f27cad39ed476ceebbfba068396c7c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714942922198607959,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-9tvbl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0a342843-0e7b-4235-8a87-1ab68db8e982,},Annotations:map[string]string{io.kubernetes.container.hash: 339662e5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a,PodSandboxId:32345da98def5801f0a61a844ee21ae1988070fb778e54314d210352304c49b7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714942911152960461,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-j6g6c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 350b4f6a-6a3b-404f-813f-84fd686ecd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 85d162ce,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9de9d8ab45c00db0b8ef19b6f7edc9c34c1df029fe91f4f5e4ce2ea80d6c7f,PodSandboxId:ce6559bbc0c22fb1d31e049017719339f47aec95475e4a24596865bb2a6ca094,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171494
2832163888123,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-2nv87,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6020ab74-7313-45e6-8080-4e84b676efe6,},Annotations:map[string]string{io.kubernetes.container.hash: 80e8b8e4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df,PodSandboxId:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714942800071785995,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nsvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,},Annotations:map[string]string{io.kubernetes.container.hash: 924e7843,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4,PodSandboxId:c9860ed473d4858386b69e8d662426e54a7450884a6ab91e1a8705cb9b3a6e4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714942768900464464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,},Annotations:map[string]string{io.kubernetes.container.hash: f9eee02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948,PodSandboxId:b0dd0025b9663eadd825753c4fa81257b86a6115a6c63bb5159ade58fdff06e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714942761309610772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnhf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b69b2-9942-4035-bba5-637a32176daa,},Annotations:map[string]string{io.kubernetes.container.hash: 27a2bfe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993,PodSand
boxId:51d78c16bbcdc39b6c1e9f90e2a00e4b80d4b66d9268652840e6686b95d322df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714942759200801790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qrfs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b627b443-bc49-42d8-ae83-f6893f382003,},Annotations:map[string]string{io.kubernetes.container.hash: 5382cea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f,PodSandboxId:4de96b20bb9ef3046e406342d12
59f2165032c640bce1d4eeab12c65545372e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714942737426442242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa7b1eda3c5a600ae0b2a0ea78fb243,},Annotations:map[string]string{io.kubernetes.container.hash: ca6ce4b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9,PodSandboxId:5d962e468c6ce947137c3b6849400443ce13c4c8941e2771802d2f27f37c948e,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714942737312474166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3920ed2d88d8c0d183cbbde1ee79949,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777,PodSandboxId:01e9bfa6c65ce73b9c2b5172b5d3c0256982c5fad1e1f4e8d850a5f5d74154e6,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714942737302536455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f2eeee73d76512f8cf103629b0adf8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972,PodSandboxId:2645e8c72e081d1751645cb482df2b6f3508faf426f66c8714d26dc01f62aa09,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714942737338558826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682daeee39870862b84bb87f95a68c7,},Annotations:map[string]string{io.kubernetes.container.hash: a13feaa9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eec97b84-07d8-48cf-94e9-960e542a7214 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.646915504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93e3f6cc-e10a-4fc2-8bde-c72fe5e71e6c name=/runtime.v1.RuntimeService/Version
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.647230374Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93e3f6cc-e10a-4fc2-8bde-c72fe5e71e6c name=/runtime.v1.RuntimeService/Version
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.654331363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75b2242c-46ae-45a0-b66c-008a2ed51fe9 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.655604428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714943105655574837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75b2242c-46ae-45a0-b66c-008a2ed51fe9 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.656456560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5079dee4-f48e-4096-98c8-240b788b4dd7 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.656519593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5079dee4-f48e-4096-98c8-240b788b4dd7 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:05:05 addons-476078 crio[679]: time="2024-05-05 21:05:05.656952573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b99ea752b7cad46711807229f07d8f43a6fb4ef08b22d378e07d5eda579a58c3,PodSandboxId:088fffe955b6fb98bbdfa224bc4b0057178baa3a3c9a9adc41602198d7b761e2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714943098747125803,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-28xbq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fca35d04-feb9-4aa8-b28e-582ccdde30b3,},Annotations:map[string]string{io.kubernetes.container.hash: e1ec41f,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4875e65576390576ddc7bb10fe9f4a135c15f48c22e01f2e26ec76fcea8e3f2d,PodSandboxId:930548ca74204279807e110df6241d75f3a71928df046856904881f918d49a15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714942958350024556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd318b3-f460-41a7-8b57-def112b59f42,},Annotations:map[string]string{io.kuberne
tes.container.hash: 71ff289f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d73a3c5fb17d80f5ac83f20ab31b627b7313bb0271ca970f606bc20cc744a1,PodSandboxId:4ad089ecea889b49f9f3583a6616859766f27cad39ed476ceebbfba068396c7c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714942922198607959,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-9tvbl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0a342843-0e7b-4235-8a87-1ab68db8e982,},Annotations:map[string]string{io.kubernetes.container.hash: 339662e5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a,PodSandboxId:32345da98def5801f0a61a844ee21ae1988070fb778e54314d210352304c49b7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714942911152960461,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-j6g6c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 350b4f6a-6a3b-404f-813f-84fd686ecd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 85d162ce,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9de9d8ab45c00db0b8ef19b6f7edc9c34c1df029fe91f4f5e4ce2ea80d6c7f,PodSandboxId:ce6559bbc0c22fb1d31e049017719339f47aec95475e4a24596865bb2a6ca094,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171494
2832163888123,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-2nv87,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6020ab74-7313-45e6-8080-4e84b676efe6,},Annotations:map[string]string{io.kubernetes.container.hash: 80e8b8e4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df,PodSandboxId:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714942800071785995,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nsvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,},Annotations:map[string]string{io.kubernetes.container.hash: 924e7843,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4,PodSandboxId:c9860ed473d4858386b69e8d662426e54a7450884a6ab91e1a8705cb9b3a6e4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714942768900464464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,},Annotations:map[string]string{io.kubernetes.container.hash: f9eee02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948,PodSandboxId:b0dd0025b9663eadd825753c4fa81257b86a6115a6c63bb5159ade58fdff06e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714942761309610772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnhf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b69b2-9942-4035-bba5-637a32176daa,},Annotations:map[string]string{io.kubernetes.container.hash: 27a2bfe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993,PodSand
boxId:51d78c16bbcdc39b6c1e9f90e2a00e4b80d4b66d9268652840e6686b95d322df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714942759200801790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qrfs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b627b443-bc49-42d8-ae83-f6893f382003,},Annotations:map[string]string{io.kubernetes.container.hash: 5382cea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f,PodSandboxId:4de96b20bb9ef3046e406342d12
59f2165032c640bce1d4eeab12c65545372e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714942737426442242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa7b1eda3c5a600ae0b2a0ea78fb243,},Annotations:map[string]string{io.kubernetes.container.hash: ca6ce4b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9,PodSandboxId:5d962e468c6ce947137c3b6849400443ce13c4c8941e2771802d2f27f37c948e,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714942737312474166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3920ed2d88d8c0d183cbbde1ee79949,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777,PodSandboxId:01e9bfa6c65ce73b9c2b5172b5d3c0256982c5fad1e1f4e8d850a5f5d74154e6,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714942737302536455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f2eeee73d76512f8cf103629b0adf8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972,PodSandboxId:2645e8c72e081d1751645cb482df2b6f3508faf426f66c8714d26dc01f62aa09,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714942737338558826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682daeee39870862b84bb87f95a68c7,},Annotations:map[string]string{io.kubernetes.container.hash: a13feaa9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5079dee4-f48e-4096-98c8-240b788b4dd7 name=/runtime.v1.RuntimeService/ListContainers
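	The ListContainersRequest/ListContainersResponse pairs above are the CRI calls behind the container table that follows. The same data can be pulled by hand from the node's CRI-O socket (a sketch, assuming the addons-476078 guest is still up and crictl is available inside it, as it is in the standard minikube image):
	
	  # list all containers via the CRI API -- the RPC logged above
	  out/minikube-linux-amd64 -p addons-476078 ssh "sudo crictl ps -a"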
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b99ea752b7cad       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 7 seconds ago       Running             hello-world-app           0                   088fffe955b6f       hello-world-app-86c47465fc-28xbq
	4875e65576390       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                         2 minutes ago       Running             nginx                     0                   930548ca74204       nginx
	33d73a3c5fb17       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   3 minutes ago       Running             headlamp                  0                   4ad089ecea889       headlamp-7559bf459f-9tvbl
	7444e66e63708       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            3 minutes ago       Running             gcp-auth                  0                   32345da98def5       gcp-auth-5db96cd9b4-j6g6c
	0f9de9d8ab45c       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         4 minutes ago       Running             yakd                      0                   ce6559bbc0c22       yakd-dashboard-5ddbf7d777-2nv87
	3330869b88ae3       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   5 minutes ago       Running             metrics-server            0                   fe1bb8afdd7be       metrics-server-c59844bb4-nsvl8
	dd225ed77802f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        5 minutes ago       Running             storage-provisioner       0                   c9860ed473d48       storage-provisioner
	b9645db293186       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        5 minutes ago       Running             coredns                   0                   b0dd0025b9663       coredns-7db6d8ff4d-gnhf4
	bbf845eacbdf1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        5 minutes ago       Running             kube-proxy                0                   51d78c16bbcdc       kube-proxy-qrfs4
	37b082aa54a6b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        6 minutes ago       Running             etcd                      0                   4de96b20bb9ef       etcd-addons-476078
	7c82f83da0a70       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        6 minutes ago       Running             kube-apiserver            0                   2645e8c72e081       kube-apiserver-addons-476078
	5647ed381b790       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        6 minutes ago       Running             kube-scheduler            0                   5d962e468c6ce       kube-scheduler-addons-476078
	4465f04bcc1d8       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        6 minutes ago       Running             kube-controller-manager   0                   01e9bfa6c65ce       kube-controller-manager-addons-476078
	
	
	==> coredns [b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948] <==
	[INFO] 10.244.0.7:60000 - 17304 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000288916s
	[INFO] 10.244.0.7:42673 - 1271 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010519s
	[INFO] 10.244.0.7:42673 - 23029 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042233s
	[INFO] 10.244.0.7:60915 - 55580 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000622s
	[INFO] 10.244.0.7:60915 - 13074 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102583s
	[INFO] 10.244.0.7:53029 - 44331 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000070472s
	[INFO] 10.244.0.7:53029 - 22581 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060784s
	[INFO] 10.244.0.7:51014 - 58299 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000254878s
	[INFO] 10.244.0.7:51014 - 18247 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00026172s
	[INFO] 10.244.0.7:44546 - 6369 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000298138s
	[INFO] 10.244.0.7:44546 - 58083 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076978s
	[INFO] 10.244.0.7:43853 - 55745 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080383s
	[INFO] 10.244.0.7:43853 - 11724 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034283s
	[INFO] 10.244.0.7:59564 - 34109 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090581s
	[INFO] 10.244.0.7:59564 - 60987 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00038734s
	[INFO] 10.244.0.22:44000 - 21608 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000535313s
	[INFO] 10.244.0.22:53522 - 50801 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162607s
	[INFO] 10.244.0.22:46779 - 35167 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000207779s
	[INFO] 10.244.0.22:37589 - 17861 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147137s
	[INFO] 10.244.0.22:48892 - 16730 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160378s
	[INFO] 10.244.0.22:60215 - 3806 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120841s
	[INFO] 10.244.0.22:35297 - 63835 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001541316s
	[INFO] 10.244.0.22:39888 - 6098 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001612657s
	[INFO] 10.244.0.26:49412 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000409049s
	[INFO] 10.244.0.26:40237 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094033s
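	The runs of NXDOMAIN answers for names such as registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local are ordinary ndots search-path expansion by the pod resolvers, not a CoreDNS failure: each lookup walks the search suffixes until the fully qualified name returns NOERROR. The resolver configuration a pod actually uses can be checked like this (a sketch; the nginx pod in the default namespace is taken from the container table above):
	
	  # show the search list and ndots option the pod resolves with
	  kubectl --context addons-476078 exec nginx -- cat /etc/resolv.conf
	  # typically prints something like:
	  #   search default.svc.cluster.local svc.cluster.local cluster.local
	  #   nameserver 10.96.0.10
	  #   options ndots:5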
	
	
	==> describe nodes <==
	Name:               addons-476078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-476078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=addons-476078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T20_59_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-476078
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 20:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-476078
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:05:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:03:08 +0000   Sun, 05 May 2024 20:58:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:03:08 +0000   Sun, 05 May 2024 20:58:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:03:08 +0000   Sun, 05 May 2024 20:58:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:03:08 +0000   Sun, 05 May 2024 20:59:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    addons-476078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb49be14c4984910ab2cbdb5bb38e82c
	  System UUID:                cb49be14-c498-4910-ab2c-bdb5bb38e82c
	  Boot ID:                    49930f1f-b9dc-45c3-8200-621abad2788b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-28xbq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-j6g6c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  headlamp                    headlamp-7559bf459f-9tvbl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 coredns-7db6d8ff4d-gnhf4                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m49s
	  kube-system                 etcd-addons-476078                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m5s
	  kube-system                 kube-apiserver-addons-476078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-addons-476078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-qrfs4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-addons-476078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 metrics-server-c59844bb4-nsvl8           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m43s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-2nv87          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m45s  kube-proxy       
	  Normal  Starting                 6m2s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m2s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m2s   kubelet          Node addons-476078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s   kubelet          Node addons-476078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s   kubelet          Node addons-476078 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m1s   kubelet          Node addons-476078 status is now: NodeReady
	  Normal  RegisteredNode           5m49s  node-controller  Node addons-476078 event: Registered Node addons-476078 in Controller
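	The CPU request total in the summary (850m of 2000m allocatable, i.e. ~42%) is just the sum of the per-pod requests listed above: 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 850m. The same node view can be regenerated directly against the profile (a sketch):
	
	  kubectl --context addons-476078 describe node addons-476078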
	
	
	==> dmesg <==
	[  +6.163970] kauditd_printk_skb: 139 callbacks suppressed
	[ +14.663027] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.929351] kauditd_printk_skb: 2 callbacks suppressed
	[May 5 21:00] kauditd_printk_skb: 4 callbacks suppressed
	[ +21.638290] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.478882] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.690122] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.369762] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.447949] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.032700] kauditd_printk_skb: 16 callbacks suppressed
	[May 5 21:01] kauditd_printk_skb: 4 callbacks suppressed
	[ +29.959141] kauditd_printk_skb: 26 callbacks suppressed
	[ +13.438532] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.579573] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.034675] kauditd_printk_skb: 23 callbacks suppressed
	[May 5 21:02] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.015891] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.364459] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.698911] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.870348] kauditd_printk_skb: 31 callbacks suppressed
	[  +7.354942] kauditd_printk_skb: 24 callbacks suppressed
	[  +9.051246] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.537837] kauditd_printk_skb: 33 callbacks suppressed
	[May 5 21:04] kauditd_printk_skb: 6 callbacks suppressed
	[May 5 21:05] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f] <==
	{"level":"info","ts":"2024-05-05T21:00:51.592043Z","caller":"traceutil/trace.go:171","msg":"trace[61689317] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1129; }","duration":"244.338543ms","start":"2024-05-05T21:00:51.347697Z","end":"2024-05-05T21:00:51.592035Z","steps":["trace[61689317] 'agreement among raft nodes before linearized reading'  (duration: 244.196676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:00:51.592157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.148411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85552"}
	{"level":"info","ts":"2024-05-05T21:00:51.592215Z","caller":"traceutil/trace.go:171","msg":"trace[1674021293] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1129; }","duration":"142.22488ms","start":"2024-05-05T21:00:51.449982Z","end":"2024-05-05T21:00:51.592207Z","steps":["trace[1674021293] 'agreement among raft nodes before linearized reading'  (duration: 142.043916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:00:51.592011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"301.634756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-05T21:00:51.592281Z","caller":"traceutil/trace.go:171","msg":"trace[1423719107] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1129; }","duration":"301.925736ms","start":"2024-05-05T21:00:51.290347Z","end":"2024-05-05T21:00:51.592273Z","steps":["trace[1423719107] 'agreement among raft nodes before linearized reading'  (duration: 301.639037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:00:51.592321Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:00:51.290334Z","time spent":"301.978989ms","remote":"127.0.0.1:45388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-05T21:01:09.929143Z","caller":"traceutil/trace.go:171","msg":"trace[2006913862] linearizableReadLoop","detail":"{readStateIndex:1246; appliedIndex:1245; }","duration":"186.506447ms","start":"2024-05-05T21:01:09.742589Z","end":"2024-05-05T21:01:09.929096Z","steps":["trace[2006913862] 'read index received'  (duration: 186.344887ms)","trace[2006913862] 'applied index is now lower than readState.Index'  (duration: 161.03µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:01:09.929447Z","caller":"traceutil/trace.go:171","msg":"trace[849019111] transaction","detail":"{read_only:false; response_revision:1204; number_of_response:1; }","duration":"230.096059ms","start":"2024-05-05T21:01:09.699337Z","end":"2024-05-05T21:01:09.929433Z","steps":["trace[849019111] 'process raft request'  (duration: 229.647647ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:01:09.92954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.308789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-05T21:01:09.930846Z","caller":"traceutil/trace.go:171","msg":"trace[1965804214] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:1204; }","duration":"110.70195ms","start":"2024-05-05T21:01:09.820131Z","end":"2024-05-05T21:01:09.930833Z","steps":["trace[1965804214] 'agreement among raft nodes before linearized reading'  (duration: 109.305294ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:01:09.929752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.154335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-05-05T21:01:09.931115Z","caller":"traceutil/trace.go:171","msg":"trace[1246568627] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1204; }","duration":"188.550432ms","start":"2024-05-05T21:01:09.742554Z","end":"2024-05-05T21:01:09.931104Z","steps":["trace[1246568627] 'agreement among raft nodes before linearized reading'  (duration: 187.011489ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:02:01.148015Z","caller":"traceutil/trace.go:171","msg":"trace[1692051660] linearizableReadLoop","detail":"{readStateIndex:1415; appliedIndex:1414; }","duration":"256.678693ms","start":"2024-05-05T21:02:00.891309Z","end":"2024-05-05T21:02:01.147988Z","steps":["trace[1692051660] 'read index received'  (duration: 256.496184ms)","trace[1692051660] 'applied index is now lower than readState.Index'  (duration: 181.942µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:02:01.148132Z","caller":"traceutil/trace.go:171","msg":"trace[1637680112] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"276.59198ms","start":"2024-05-05T21:02:00.871531Z","end":"2024-05-05T21:02:01.148123Z","steps":["trace[1637680112] 'process raft request'  (duration: 276.325721ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:02:01.148409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.078282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-05-05T21:02:01.14847Z","caller":"traceutil/trace.go:171","msg":"trace[1958757565] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1361; }","duration":"257.175848ms","start":"2024-05-05T21:02:00.891285Z","end":"2024-05-05T21:02:01.148461Z","steps":["trace[1958757565] 'agreement among raft nodes before linearized reading'  (duration: 257.042271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:02:01.148852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.132953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85982"}
	{"level":"info","ts":"2024-05-05T21:02:01.148937Z","caller":"traceutil/trace.go:171","msg":"trace[1286363820] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1361; }","duration":"191.244517ms","start":"2024-05-05T21:02:00.957685Z","end":"2024-05-05T21:02:01.14893Z","steps":["trace[1286363820] 'agreement among raft nodes before linearized reading'  (duration: 190.92352ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:02:17.953831Z","caller":"traceutil/trace.go:171","msg":"trace[458017061] linearizableReadLoop","detail":"{readStateIndex:1579; appliedIndex:1578; }","duration":"114.854288ms","start":"2024-05-05T21:02:17.83894Z","end":"2024-05-05T21:02:17.953794Z","steps":["trace[458017061] 'read index received'  (duration: 114.471128ms)","trace[458017061] 'applied index is now lower than readState.Index'  (duration: 382.668µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-05T21:02:17.954061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.091875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-05-05T21:02:17.954094Z","caller":"traceutil/trace.go:171","msg":"trace[335140393] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1518; }","duration":"115.170965ms","start":"2024-05-05T21:02:17.838915Z","end":"2024-05-05T21:02:17.954086Z","steps":["trace[335140393] 'agreement among raft nodes before linearized reading'  (duration: 115.027794ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:02:17.954301Z","caller":"traceutil/trace.go:171","msg":"trace[2052233926] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1518; }","duration":"387.059018ms","start":"2024-05-05T21:02:17.567236Z","end":"2024-05-05T21:02:17.954295Z","steps":["trace[2052233926] 'process raft request'  (duration: 386.272916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:02:17.954496Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:02:17.567225Z","time spent":"387.105791ms","remote":"127.0.0.1:45808","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":51,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/tiller-deploy\" mod_revision:945 > success:<request_delete_range:<key:\"/registry/deployments/kube-system/tiller-deploy\" > > failure:<request_range:<key:\"/registry/deployments/kube-system/tiller-deploy\" > >"}
	{"level":"warn","ts":"2024-05-05T21:02:22.088101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.963489ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12753788745123965929 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gadget/gadget-9p6xb.17ccb36e996f8bf0\" mod_revision:1229 > success:<request_delete_range:<key:\"/registry/events/gadget/gadget-9p6xb.17ccb36e996f8bf0\" > > failure:<request_range:<key:\"/registry/events/gadget/gadget-9p6xb.17ccb36e996f8bf0\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-05-05T21:02:22.088202Z","caller":"traceutil/trace.go:171","msg":"trace[1301753871] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1548; }","duration":"292.151907ms","start":"2024-05-05T21:02:21.796039Z","end":"2024-05-05T21:02:22.088191Z","steps":["trace[1301753871] 'process raft request'  (duration: 112.788554ms)","trace[1301753871] 'compare'  (duration: 178.891891ms)"],"step_count":2}
	
	
	==> gcp-auth [7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a] <==
	2024/05/05 21:01:51 GCP Auth Webhook started!
	2024/05/05 21:01:52 Ready to marshal response ...
	2024/05/05 21:01:52 Ready to write response ...
	2024/05/05 21:01:52 Ready to marshal response ...
	2024/05/05 21:01:52 Ready to write response ...
	2024/05/05 21:01:53 Ready to marshal response ...
	2024/05/05 21:01:53 Ready to write response ...
	2024/05/05 21:01:53 Ready to marshal response ...
	2024/05/05 21:01:53 Ready to write response ...
	2024/05/05 21:01:53 Ready to marshal response ...
	2024/05/05 21:01:53 Ready to write response ...
	2024/05/05 21:02:04 Ready to marshal response ...
	2024/05/05 21:02:04 Ready to write response ...
	2024/05/05 21:02:07 Ready to marshal response ...
	2024/05/05 21:02:07 Ready to write response ...
	2024/05/05 21:02:10 Ready to marshal response ...
	2024/05/05 21:02:10 Ready to write response ...
	2024/05/05 21:02:10 Ready to marshal response ...
	2024/05/05 21:02:10 Ready to write response ...
	2024/05/05 21:02:33 Ready to marshal response ...
	2024/05/05 21:02:33 Ready to write response ...
	2024/05/05 21:02:37 Ready to marshal response ...
	2024/05/05 21:02:37 Ready to write response ...
	2024/05/05 21:04:54 Ready to marshal response ...
	2024/05/05 21:04:54 Ready to write response ...
	
	
	==> kernel <==
	 21:05:06 up 6 min,  0 users,  load average: 0.28, 1.24, 0.75
	Linux addons-476078 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972] <==
	E0505 21:01:03.910559       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.223.33:443: connect: connection refused
	E0505 21:01:03.917252       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.223.33:443: connect: connection refused
	I0505 21:01:03.988046       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0505 21:01:53.105951       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.57.201"}
	I0505 21:02:16.737555       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0505 21:02:17.798056       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0505 21:02:23.821365       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0505 21:02:24.123442       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0505 21:02:29.603857       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0505 21:02:33.567433       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0505 21:02:33.748335       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.222.71"}
	I0505 21:02:56.504439       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.504533       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.537119       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.537161       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.540155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.540219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.581424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.581500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.582920       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.583588       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0505 21:02:57.541389       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0505 21:02:57.584110       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0505 21:02:57.617143       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0505 21:04:54.769080       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.140.40"}
	
	
	==> kube-controller-manager [4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777] <==
	E0505 21:03:27.752273       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:03:33.796456       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:03:33.796560       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:03:35.635825       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:03:35.635879       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:04:05.730498       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:04:05.730612       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:04:15.994997       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:04:15.995187       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:04:16.854398       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:04:16.854463       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:04:25.895072       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:04:25.895177       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:04:46.372288       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:04:46.372400       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0505 21:04:54.646271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="69.072396ms"
	I0505 21:04:54.676855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="30.381177ms"
	I0505 21:04:54.677597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="63.967µs"
	I0505 21:04:57.621866       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0505 21:04:57.630048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.068µs"
	I0505 21:04:57.639026       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0505 21:04:58.350907       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:04:58.350999       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0505 21:04:59.556001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="14.81748ms"
	I0505 21:04:59.556615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="157.312µs"
	
	
	==> kube-proxy [bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993] <==
	I0505 20:59:19.919104       1 server_linux.go:69] "Using iptables proxy"
	I0505 20:59:19.937571       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0505 20:59:20.163671       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 20:59:20.163712       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 20:59:20.163728       1 server_linux.go:165] "Using iptables Proxier"
	I0505 20:59:20.177250       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 20:59:20.177412       1 server.go:872] "Version info" version="v1.30.0"
	I0505 20:59:20.177428       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 20:59:20.178829       1 config.go:192] "Starting service config controller"
	I0505 20:59:20.178839       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 20:59:20.178864       1 config.go:101] "Starting endpoint slice config controller"
	I0505 20:59:20.178869       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 20:59:20.179253       1 config.go:319] "Starting node config controller"
	I0505 20:59:20.179294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 20:59:20.279884       1 shared_informer.go:320] Caches are synced for node config
	I0505 20:59:20.279914       1 shared_informer.go:320] Caches are synced for service config
	I0505 20:59:20.279937       1 shared_informer.go:320] Caches are synced for endpoint slice config
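	(Not part of the captured kube-proxy log: the "Setting route_localnet=1 ..." line above itself names the two knobs for changing that behaviour. Shown only as a rough sketch of those flags on the kube-proxy command line; the CIDR is illustrative, taken from the 192.168.39.0/24 subnet this cluster uses.)
	    kube-proxy --iptables-localhost-nodeports=false   # stop exposing NodePorts on 127.0.0.1
	    kube-proxy --nodeport-addresses=192.168.39.0/24   # or: limit NodePorts to the node's subnet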
	
	
	==> kube-scheduler [5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9] <==
	W0505 20:59:00.027827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 20:59:00.030320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 20:59:00.027869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 20:59:00.027905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 20:59:00.032820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 20:59:00.032888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 20:59:00.913584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 20:59:00.913708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 20:59:00.963112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 20:59:00.963191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 20:59:01.129163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 20:59:01.129219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 20:59:01.133067       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0505 20:59:01.133129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0505 20:59:01.170428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0505 20:59:01.170484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0505 20:59:01.226127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 20:59:01.226182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 20:59:01.237895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 20:59:01.238008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 20:59:01.289050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 20:59:01.289126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 20:59:01.456089       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 20:59:01.456147       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0505 20:59:04.085169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 05 21:04:56 addons-476078 kubelet[1267]: I0505 21:04:56.479845    1267 scope.go:117] "RemoveContainer" containerID="980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa"
	May 05 21:04:56 addons-476078 kubelet[1267]: I0505 21:04:56.525458    1267 scope.go:117] "RemoveContainer" containerID="980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa"
	May 05 21:04:56 addons-476078 kubelet[1267]: E0505 21:04:56.526064    1267 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa\": container with ID starting with 980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa not found: ID does not exist" containerID="980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa"
	May 05 21:04:56 addons-476078 kubelet[1267]: I0505 21:04:56.526128    1267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa"} err="failed to get container status \"980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa\": rpc error: code = NotFound desc = could not find container \"980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa\": container with ID starting with 980cf38bbb307399ab974c621f524b4fee6597f21e102d4a4330c299911789fa not found: ID does not exist"
	May 05 21:04:57 addons-476078 kubelet[1267]: I0505 21:04:57.149261    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b9cc6b-903c-41c2-9101-cc4acb08ee22" path="/var/lib/kubelet/pods/92b9cc6b-903c-41c2-9101-cc4acb08ee22/volumes"
	May 05 21:04:59 addons-476078 kubelet[1267]: I0505 21:04:59.095879    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b78b376-c861-454a-a8b8-605ec897905a" path="/var/lib/kubelet/pods/7b78b376-c861-454a-a8b8-605ec897905a/volumes"
	May 05 21:04:59 addons-476078 kubelet[1267]: I0505 21:04:59.096357    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7d0976-ab25-4a24-959d-3f33884f6728" path="/var/lib/kubelet/pods/cc7d0976-ab25-4a24-959d-3f33884f6728/volumes"
	May 05 21:05:00 addons-476078 kubelet[1267]: I0505 21:05:00.935862    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/227502bd-e496-449f-b163-a504cdde3568-webhook-cert\") pod \"227502bd-e496-449f-b163-a504cdde3568\" (UID: \"227502bd-e496-449f-b163-a504cdde3568\") "
	May 05 21:05:00 addons-476078 kubelet[1267]: I0505 21:05:00.935896    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnqbt\" (UniqueName: \"kubernetes.io/projected/227502bd-e496-449f-b163-a504cdde3568-kube-api-access-nnqbt\") pod \"227502bd-e496-449f-b163-a504cdde3568\" (UID: \"227502bd-e496-449f-b163-a504cdde3568\") "
	May 05 21:05:00 addons-476078 kubelet[1267]: I0505 21:05:00.941905    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227502bd-e496-449f-b163-a504cdde3568-kube-api-access-nnqbt" (OuterVolumeSpecName: "kube-api-access-nnqbt") pod "227502bd-e496-449f-b163-a504cdde3568" (UID: "227502bd-e496-449f-b163-a504cdde3568"). InnerVolumeSpecName "kube-api-access-nnqbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 05 21:05:00 addons-476078 kubelet[1267]: I0505 21:05:00.949985    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227502bd-e496-449f-b163-a504cdde3568-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "227502bd-e496-449f-b163-a504cdde3568" (UID: "227502bd-e496-449f-b163-a504cdde3568"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 05 21:05:01 addons-476078 kubelet[1267]: I0505 21:05:01.037133    1267 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/227502bd-e496-449f-b163-a504cdde3568-webhook-cert\") on node \"addons-476078\" DevicePath \"\""
	May 05 21:05:01 addons-476078 kubelet[1267]: I0505 21:05:01.037169    1267 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nnqbt\" (UniqueName: \"kubernetes.io/projected/227502bd-e496-449f-b163-a504cdde3568-kube-api-access-nnqbt\") on node \"addons-476078\" DevicePath \"\""
	May 05 21:05:01 addons-476078 kubelet[1267]: I0505 21:05:01.096014    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227502bd-e496-449f-b163-a504cdde3568" path="/var/lib/kubelet/pods/227502bd-e496-449f-b163-a504cdde3568/volumes"
	May 05 21:05:01 addons-476078 kubelet[1267]: I0505 21:05:01.541266    1267 scope.go:117] "RemoveContainer" containerID="de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816"
	May 05 21:05:01 addons-476078 kubelet[1267]: I0505 21:05:01.561819    1267 scope.go:117] "RemoveContainer" containerID="de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816"
	May 05 21:05:01 addons-476078 kubelet[1267]: E0505 21:05:01.562480    1267 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816\": container with ID starting with de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816 not found: ID does not exist" containerID="de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816"
	May 05 21:05:01 addons-476078 kubelet[1267]: I0505 21:05:01.562532    1267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816"} err="failed to get container status \"de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816\": rpc error: code = NotFound desc = could not find container \"de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816\": container with ID starting with de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816 not found: ID does not exist"
	May 05 21:05:03 addons-476078 kubelet[1267]: E0505 21:05:03.146518    1267 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:05:03 addons-476078 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:05:03 addons-476078 kubelet[1267]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:05:03 addons-476078 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:05:03 addons-476078 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:05:04 addons-476078 kubelet[1267]: I0505 21:05:04.508817    1267 scope.go:117] "RemoveContainer" containerID="fd1c1a2f4f0bce290536c8709310300492909fa1d1d05e8a1c2770c8e382966e"
	May 05 21:05:04 addons-476078 kubelet[1267]: I0505 21:05:04.533730    1267 scope.go:117] "RemoveContainer" containerID="ba07ef0aee9a097f68533b37e783be89e0fbd2865d0a8be0eea00000654665a1"
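	(Not part of the captured kubelet log: the KUBE-KUBELET-CANARY failure above comes from the kubelet's periodic iptables canary, and the "Table does not exist (do you need to insmod?)" wording suggests the ip6table_nat module is simply not loaded in the Buildroot guest. A minimal manual check, assuming the addons-476078 VM is still running.)
	    minikube -p addons-476078 ssh "lsmod | grep ip6table_nat"    # empty output would confirm the module is not loaded
	    minikube -p addons-476078 ssh "sudo modprobe ip6table_nat"   # attempt to load it, if the guest kernel ships the module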
	
	
	==> storage-provisioner [dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4] <==
	I0505 20:59:29.315804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0505 20:59:29.344539       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0505 20:59:29.344704       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0505 20:59:29.361144       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0505 20:59:29.362948       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-476078_4e1d4da4-531d-4e79-a12c-3ea4818c1ceb!
	I0505 20:59:29.372099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"420b0165-510a-4f68-93a0-80a2e3d822fd", APIVersion:"v1", ResourceVersion:"798", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-476078_4e1d4da4-531d-4e79-a12c-3ea4818c1ceb became leader
	I0505 20:59:29.464490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-476078_4e1d4da4-531d-4e79-a12c-3ea4818c1ceb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-476078 -n addons-476078
helpers_test.go:261: (dbg) Run:  kubectl --context addons-476078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.65s)

                                                
                                    
TestAddons/parallel/MetricsServer (344.81s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.755463ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-nsvl8" [8b3d4733-9d64-4587-9ed8-b33c78c6ccf0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005197121s
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (92.105346ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 3m8.249188341s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (76.49187ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 3m12.10209427s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (78.989686ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 3m14.58715665s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (66.600144ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 3m20.425557181s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (104.607692ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 3m33.976466955s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (65.201198ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 3m50.083955365s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (65.140958ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 4m17.948258933s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (61.671968ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 4m35.669052063s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (77.040534ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 5m50.300447498s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (64.663824ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 7m17.440708426s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-476078 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-476078 top pods -n kube-system: exit status 1 (63.958657ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-gnhf4, age: 8m43.877572757s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
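The repeated "kubectl top pods" failures above mean the Metrics API never returned data for these pods; the kube-apiserver log later in this report shows v1beta1.metrics.k8s.io being flagged as failing around startup. A quick manual triage, assuming the same kube context and the k8s-app=metrics-server label used by the test (these commands are not run by the test):
    kubectl --context addons-476078 get apiservice v1beta1.metrics.k8s.io          # Available should be True once the API is served
    kubectl --context addons-476078 -n kube-system logs -l k8s-app=metrics-server  # metrics-server's own view of why it is not ready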
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-476078 -n addons-476078
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-476078 logs -n 25: (1.567677022s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| delete  | -p download-only-302864                                                                     | download-only-302864 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| delete  | -p download-only-583025                                                                     | download-only-583025 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| delete  | -p download-only-302864                                                                     | download-only-302864 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-490333 | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | binary-mirror-490333                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42709                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-490333                                                                     | binary-mirror-490333 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| addons  | disable dashboard -p                                                                        | addons-476078        | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-476078        | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-476078 --wait=true                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 21:01 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:01 UTC | 05 May 24 21:01 UTC |
	|         | -p addons-476078                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:01 UTC | 05 May 24 21:01 UTC |
	|         | -p addons-476078                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-476078 ssh cat                                                                       | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | /opt/local-path-provisioner/pvc-cbe9cb1d-6e41-4e52-b663-b8efdb599694_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-476078 ip                                                                            | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | addons-476078                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-476078 ssh curl -s                                                                   | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-476078 addons                                                                        | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-476078 addons                                                                        | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:02 UTC | 05 May 24 21:02 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-476078 ip                                                                            | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:04 UTC | 05 May 24 21:04 UTC |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:04 UTC | 05 May 24 21:04 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-476078 addons disable                                                                | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:04 UTC | 05 May 24 21:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-476078 addons                                                                        | addons-476078        | jenkins | v1.33.0 | 05 May 24 21:07 UTC | 05 May 24 21:08 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 20:58:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 20:58:19.757317   19551 out.go:291] Setting OutFile to fd 1 ...
	I0505 20:58:19.757453   19551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:58:19.757463   19551 out.go:304] Setting ErrFile to fd 2...
	I0505 20:58:19.757467   19551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:58:19.757680   19551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 20:58:19.758284   19551 out.go:298] Setting JSON to false
	I0505 20:58:19.759112   19551 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2447,"bootTime":1714940253,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 20:58:19.759168   19551 start.go:139] virtualization: kvm guest
	I0505 20:58:19.761428   19551 out.go:177] * [addons-476078] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 20:58:19.762736   19551 notify.go:220] Checking for updates...
	I0505 20:58:19.762748   19551 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 20:58:19.764237   19551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 20:58:19.765796   19551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 20:58:19.767345   19551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:58:19.768946   19551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 20:58:19.770404   19551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 20:58:19.771876   19551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 20:58:19.802911   19551 out.go:177] * Using the kvm2 driver based on user configuration
	I0505 20:58:19.804366   19551 start.go:297] selected driver: kvm2
	I0505 20:58:19.804387   19551 start.go:901] validating driver "kvm2" against <nil>
	I0505 20:58:19.804401   19551 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 20:58:19.805044   19551 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:58:19.805118   19551 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 20:58:19.818711   19551 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 20:58:19.818757   19551 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 20:58:19.818950   19551 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 20:58:19.819014   19551 cni.go:84] Creating CNI manager for ""
	I0505 20:58:19.819033   19551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:58:19.819046   19551 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 20:58:19.819120   19551 start.go:340] cluster config:
	{Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 20:58:19.819221   19551 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:58:19.821003   19551 out.go:177] * Starting "addons-476078" primary control-plane node in "addons-476078" cluster
	I0505 20:58:19.822273   19551 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 20:58:19.822310   19551 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 20:58:19.822323   19551 cache.go:56] Caching tarball of preloaded images
	I0505 20:58:19.822397   19551 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 20:58:19.822410   19551 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 20:58:19.822703   19551 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/config.json ...
	I0505 20:58:19.822735   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/config.json: {Name:mkbb67ee823096213b7c142e1c0e129bcf056988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:19.822865   19551 start.go:360] acquireMachinesLock for addons-476078: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 20:58:19.822925   19551 start.go:364] duration metric: took 43.716µs to acquireMachinesLock for "addons-476078"
	I0505 20:58:19.822948   19551 start.go:93] Provisioning new machine with config: &{Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 20:58:19.823025   19551 start.go:125] createHost starting for "" (driver="kvm2")
	I0505 20:58:19.825229   19551 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0505 20:58:19.825363   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:58:19.825413   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:58:19.838808   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I0505 20:58:19.839168   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:58:19.839681   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:58:19.839710   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:58:19.840023   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:58:19.840183   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:19.840328   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:19.840462   19551 start.go:159] libmachine.API.Create for "addons-476078" (driver="kvm2")
	I0505 20:58:19.840493   19551 client.go:168] LocalClient.Create starting
	I0505 20:58:19.840550   19551 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 20:58:19.888731   19551 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 20:58:20.060555   19551 main.go:141] libmachine: Running pre-create checks...
	I0505 20:58:20.060581   19551 main.go:141] libmachine: (addons-476078) Calling .PreCreateCheck
	I0505 20:58:20.061101   19551 main.go:141] libmachine: (addons-476078) Calling .GetConfigRaw
	I0505 20:58:20.061496   19551 main.go:141] libmachine: Creating machine...
	I0505 20:58:20.061513   19551 main.go:141] libmachine: (addons-476078) Calling .Create
	I0505 20:58:20.061654   19551 main.go:141] libmachine: (addons-476078) Creating KVM machine...
	I0505 20:58:20.062885   19551 main.go:141] libmachine: (addons-476078) DBG | found existing default KVM network
	I0505 20:58:20.063579   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.063404   19573 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0505 20:58:20.063613   19551 main.go:141] libmachine: (addons-476078) DBG | created network xml: 
	I0505 20:58:20.063639   19551 main.go:141] libmachine: (addons-476078) DBG | <network>
	I0505 20:58:20.063650   19551 main.go:141] libmachine: (addons-476078) DBG |   <name>mk-addons-476078</name>
	I0505 20:58:20.063659   19551 main.go:141] libmachine: (addons-476078) DBG |   <dns enable='no'/>
	I0505 20:58:20.063666   19551 main.go:141] libmachine: (addons-476078) DBG |   
	I0505 20:58:20.063675   19551 main.go:141] libmachine: (addons-476078) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0505 20:58:20.063690   19551 main.go:141] libmachine: (addons-476078) DBG |     <dhcp>
	I0505 20:58:20.063721   19551 main.go:141] libmachine: (addons-476078) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0505 20:58:20.063744   19551 main.go:141] libmachine: (addons-476078) DBG |     </dhcp>
	I0505 20:58:20.063755   19551 main.go:141] libmachine: (addons-476078) DBG |   </ip>
	I0505 20:58:20.063766   19551 main.go:141] libmachine: (addons-476078) DBG |   
	I0505 20:58:20.063779   19551 main.go:141] libmachine: (addons-476078) DBG | </network>
	I0505 20:58:20.063787   19551 main.go:141] libmachine: (addons-476078) DBG | 
	I0505 20:58:20.069044   19551 main.go:141] libmachine: (addons-476078) DBG | trying to create private KVM network mk-addons-476078 192.168.39.0/24...
	I0505 20:58:20.133558   19551 main.go:141] libmachine: (addons-476078) DBG | private KVM network mk-addons-476078 192.168.39.0/24 created
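
For context on the step above: the driver picks a free private /24, renders the network XML shown in the DBG lines, and asks libvirt to define and start it. The Go sketch below reproduces that flow by shelling out to virsh; the network name "mk-demo" and the use of the CLI (rather than the libvirt API the driver actually uses) are assumptions made to keep the example short.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// Illustrative only: define and start a private libvirt network similar to
// mk-addons-476078 above. The real KVM driver talks to libvirt directly;
// shelling out to virsh and the name "mk-demo" are assumptions.
const networkXML = `<network>
  <name>mk-demo</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Register the persistent network definition, then bring it up.
	if err := run("virsh", "net-define", f.Name()); err != nil {
		log.Fatalf("net-define: %v", err)
	}
	if err := run("virsh", "net-start", "mk-demo"); err != nil {
		log.Fatalf("net-start: %v", err)
	}
	fmt.Println("private network mk-demo is up")
}
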
	I0505 20:58:20.133583   19551 main.go:141] libmachine: (addons-476078) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078 ...
	I0505 20:58:20.133615   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.133521   19573 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:58:20.133637   19551 main.go:141] libmachine: (addons-476078) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 20:58:20.133656   19551 main.go:141] libmachine: (addons-476078) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 20:58:20.373568   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.373369   19573 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa...
	I0505 20:58:20.505595   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.505434   19573 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/addons-476078.rawdisk...
	I0505 20:58:20.505635   19551 main.go:141] libmachine: (addons-476078) DBG | Writing magic tar header
	I0505 20:58:20.505653   19551 main.go:141] libmachine: (addons-476078) DBG | Writing SSH key tar header
	I0505 20:58:20.505665   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:20.505590   19573 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078 ...
	I0505 20:58:20.505765   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078
	I0505 20:58:20.505785   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 20:58:20.505809   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:58:20.505824   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078 (perms=drwx------)
	I0505 20:58:20.505833   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 20:58:20.505845   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 20:58:20.505852   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home/jenkins
	I0505 20:58:20.505860   19551 main.go:141] libmachine: (addons-476078) DBG | Checking permissions on dir: /home
	I0505 20:58:20.505867   19551 main.go:141] libmachine: (addons-476078) DBG | Skipping /home - not owner
	I0505 20:58:20.505886   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 20:58:20.505897   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 20:58:20.505949   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 20:58:20.505996   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 20:58:20.506019   19551 main.go:141] libmachine: (addons-476078) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 20:58:20.506037   19551 main.go:141] libmachine: (addons-476078) Creating domain...
	I0505 20:58:20.507040   19551 main.go:141] libmachine: (addons-476078) define libvirt domain using xml: 
	I0505 20:58:20.507069   19551 main.go:141] libmachine: (addons-476078) <domain type='kvm'>
	I0505 20:58:20.507081   19551 main.go:141] libmachine: (addons-476078)   <name>addons-476078</name>
	I0505 20:58:20.507093   19551 main.go:141] libmachine: (addons-476078)   <memory unit='MiB'>4000</memory>
	I0505 20:58:20.507103   19551 main.go:141] libmachine: (addons-476078)   <vcpu>2</vcpu>
	I0505 20:58:20.507114   19551 main.go:141] libmachine: (addons-476078)   <features>
	I0505 20:58:20.507123   19551 main.go:141] libmachine: (addons-476078)     <acpi/>
	I0505 20:58:20.507133   19551 main.go:141] libmachine: (addons-476078)     <apic/>
	I0505 20:58:20.507142   19551 main.go:141] libmachine: (addons-476078)     <pae/>
	I0505 20:58:20.507152   19551 main.go:141] libmachine: (addons-476078)     
	I0505 20:58:20.507161   19551 main.go:141] libmachine: (addons-476078)   </features>
	I0505 20:58:20.507177   19551 main.go:141] libmachine: (addons-476078)   <cpu mode='host-passthrough'>
	I0505 20:58:20.507188   19551 main.go:141] libmachine: (addons-476078)   
	I0505 20:58:20.507203   19551 main.go:141] libmachine: (addons-476078)   </cpu>
	I0505 20:58:20.507216   19551 main.go:141] libmachine: (addons-476078)   <os>
	I0505 20:58:20.507225   19551 main.go:141] libmachine: (addons-476078)     <type>hvm</type>
	I0505 20:58:20.507236   19551 main.go:141] libmachine: (addons-476078)     <boot dev='cdrom'/>
	I0505 20:58:20.507244   19551 main.go:141] libmachine: (addons-476078)     <boot dev='hd'/>
	I0505 20:58:20.507267   19551 main.go:141] libmachine: (addons-476078)     <bootmenu enable='no'/>
	I0505 20:58:20.507292   19551 main.go:141] libmachine: (addons-476078)   </os>
	I0505 20:58:20.507298   19551 main.go:141] libmachine: (addons-476078)   <devices>
	I0505 20:58:20.507308   19551 main.go:141] libmachine: (addons-476078)     <disk type='file' device='cdrom'>
	I0505 20:58:20.507329   19551 main.go:141] libmachine: (addons-476078)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/boot2docker.iso'/>
	I0505 20:58:20.507338   19551 main.go:141] libmachine: (addons-476078)       <target dev='hdc' bus='scsi'/>
	I0505 20:58:20.507365   19551 main.go:141] libmachine: (addons-476078)       <readonly/>
	I0505 20:58:20.507382   19551 main.go:141] libmachine: (addons-476078)     </disk>
	I0505 20:58:20.507396   19551 main.go:141] libmachine: (addons-476078)     <disk type='file' device='disk'>
	I0505 20:58:20.507410   19551 main.go:141] libmachine: (addons-476078)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 20:58:20.507431   19551 main.go:141] libmachine: (addons-476078)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/addons-476078.rawdisk'/>
	I0505 20:58:20.507442   19551 main.go:141] libmachine: (addons-476078)       <target dev='hda' bus='virtio'/>
	I0505 20:58:20.507449   19551 main.go:141] libmachine: (addons-476078)     </disk>
	I0505 20:58:20.507457   19551 main.go:141] libmachine: (addons-476078)     <interface type='network'>
	I0505 20:58:20.507464   19551 main.go:141] libmachine: (addons-476078)       <source network='mk-addons-476078'/>
	I0505 20:58:20.507471   19551 main.go:141] libmachine: (addons-476078)       <model type='virtio'/>
	I0505 20:58:20.507493   19551 main.go:141] libmachine: (addons-476078)     </interface>
	I0505 20:58:20.507503   19551 main.go:141] libmachine: (addons-476078)     <interface type='network'>
	I0505 20:58:20.507509   19551 main.go:141] libmachine: (addons-476078)       <source network='default'/>
	I0505 20:58:20.507517   19551 main.go:141] libmachine: (addons-476078)       <model type='virtio'/>
	I0505 20:58:20.507523   19551 main.go:141] libmachine: (addons-476078)     </interface>
	I0505 20:58:20.507535   19551 main.go:141] libmachine: (addons-476078)     <serial type='pty'>
	I0505 20:58:20.507542   19551 main.go:141] libmachine: (addons-476078)       <target port='0'/>
	I0505 20:58:20.507549   19551 main.go:141] libmachine: (addons-476078)     </serial>
	I0505 20:58:20.507570   19551 main.go:141] libmachine: (addons-476078)     <console type='pty'>
	I0505 20:58:20.507587   19551 main.go:141] libmachine: (addons-476078)       <target type='serial' port='0'/>
	I0505 20:58:20.507596   19551 main.go:141] libmachine: (addons-476078)     </console>
	I0505 20:58:20.507602   19551 main.go:141] libmachine: (addons-476078)     <rng model='virtio'>
	I0505 20:58:20.507608   19551 main.go:141] libmachine: (addons-476078)       <backend model='random'>/dev/random</backend>
	I0505 20:58:20.507616   19551 main.go:141] libmachine: (addons-476078)     </rng>
	I0505 20:58:20.507621   19551 main.go:141] libmachine: (addons-476078)     
	I0505 20:58:20.507635   19551 main.go:141] libmachine: (addons-476078)     
	I0505 20:58:20.507644   19551 main.go:141] libmachine: (addons-476078)   </devices>
	I0505 20:58:20.507652   19551 main.go:141] libmachine: (addons-476078) </domain>
	I0505 20:58:20.507659   19551 main.go:141] libmachine: (addons-476078) 
	I0505 20:58:20.513326   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:40:b2:3f in network default
	I0505 20:58:20.513914   19551 main.go:141] libmachine: (addons-476078) Ensuring networks are active...
	I0505 20:58:20.513932   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:20.514577   19551 main.go:141] libmachine: (addons-476078) Ensuring network default is active
	I0505 20:58:20.514835   19551 main.go:141] libmachine: (addons-476078) Ensuring network mk-addons-476078 is active
	I0505 20:58:20.515297   19551 main.go:141] libmachine: (addons-476078) Getting domain xml...
	I0505 20:58:20.515903   19551 main.go:141] libmachine: (addons-476078) Creating domain...
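
The "define libvirt domain using xml" block followed by "Creating domain..." corresponds to registering the generated domain XML and then booting it. A minimal sketch of that pair of operations via the virsh CLI, assuming the XML has already been written to a file; the path and domain name are placeholders, not values from this test run.

package main

import (
	"log"
	"os/exec"
)

// Minimal sketch: register a previously generated domain XML, then boot it.
// "/tmp/addons-demo.xml" and "addons-demo" are hypothetical names.
func main() {
	if out, err := exec.Command("virsh", "define", "/tmp/addons-demo.xml").CombinedOutput(); err != nil {
		log.Fatalf("virsh define failed: %v\n%s", err, out)
	}
	// Once started, the guest requests a DHCP lease on the private network,
	// which is what the "Waiting to get IP..." loop below polls for.
	if out, err := exec.Command("virsh", "start", "addons-demo").CombinedOutput(); err != nil {
		log.Fatalf("virsh start failed: %v\n%s", err, out)
	}
}
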
	I0505 20:58:21.894308   19551 main.go:141] libmachine: (addons-476078) Waiting to get IP...
	I0505 20:58:21.895004   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:21.895495   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:21.895525   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:21.895455   19573 retry.go:31] will retry after 294.594849ms: waiting for machine to come up
	I0505 20:58:22.192385   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:22.192942   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:22.192971   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:22.192888   19573 retry.go:31] will retry after 342.366044ms: waiting for machine to come up
	I0505 20:58:22.536486   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:22.536948   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:22.536978   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:22.536901   19573 retry.go:31] will retry after 462.108476ms: waiting for machine to come up
	I0505 20:58:23.000473   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:23.000925   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:23.000955   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:23.000868   19573 retry.go:31] will retry after 531.892809ms: waiting for machine to come up
	I0505 20:58:23.534681   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:23.535139   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:23.535165   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:23.535106   19573 retry.go:31] will retry after 483.047428ms: waiting for machine to come up
	I0505 20:58:24.019852   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:24.020332   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:24.020370   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:24.020240   19573 retry.go:31] will retry after 707.426774ms: waiting for machine to come up
	I0505 20:58:24.730699   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:24.731059   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:24.731084   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:24.731007   19573 retry.go:31] will retry after 832.935037ms: waiting for machine to come up
	I0505 20:58:25.565836   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:25.566268   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:25.566297   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:25.566222   19573 retry.go:31] will retry after 1.413947965s: waiting for machine to come up
	I0505 20:58:26.981758   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:26.982232   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:26.982258   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:26.982185   19573 retry.go:31] will retry after 1.825001378s: waiting for machine to come up
	I0505 20:58:28.809609   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:28.810255   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:28.810285   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:28.810215   19573 retry.go:31] will retry after 1.881229823s: waiting for machine to come up
	I0505 20:58:30.693320   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:30.693813   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:30.693844   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:30.693750   19573 retry.go:31] will retry after 2.591326187s: waiting for machine to come up
	I0505 20:58:33.286251   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:33.286563   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:33.286593   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:33.286505   19573 retry.go:31] will retry after 3.368249883s: waiting for machine to come up
	I0505 20:58:36.657463   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:36.657799   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:36.657821   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:36.657745   19573 retry.go:31] will retry after 4.19015471s: waiting for machine to come up
	I0505 20:58:40.850037   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:40.850494   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find current IP address of domain addons-476078 in network mk-addons-476078
	I0505 20:58:40.850516   19551 main.go:141] libmachine: (addons-476078) DBG | I0505 20:58:40.850447   19573 retry.go:31] will retry after 3.963765257s: waiting for machine to come up
	I0505 20:58:44.818526   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.819001   19551 main.go:141] libmachine: (addons-476078) Found IP for machine: 192.168.39.102
	I0505 20:58:44.819031   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has current primary IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.819040   19551 main.go:141] libmachine: (addons-476078) Reserving static IP address...
	I0505 20:58:44.819406   19551 main.go:141] libmachine: (addons-476078) DBG | unable to find host DHCP lease matching {name: "addons-476078", mac: "52:54:00:48:a4:72", ip: "192.168.39.102"} in network mk-addons-476078
	I0505 20:58:44.886209   19551 main.go:141] libmachine: (addons-476078) Reserved static IP address: 192.168.39.102
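
The "will retry after ..." lines above are a polling loop with a growing, jittered delay that ends once the domain's DHCP lease appears. A stand-alone sketch of that pattern; the probe function here is a stub, whereas the real code asks libvirt for the lease.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls a probe with a randomised, growing delay until it succeeds
// or the deadline passes, mirroring the retry behaviour logged above.
func waitFor(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, capping it so we keep polling.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	start := time.Now()
	ip, err := waitFor(func() (string, error) {
		// Stub probe: pretend the lease appears after ~3 seconds.
		if time.Since(start) > 3*time.Second {
			return "192.168.39.102", nil
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}
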
	I0505 20:58:44.886231   19551 main.go:141] libmachine: (addons-476078) Waiting for SSH to be available...
	I0505 20:58:44.886242   19551 main.go:141] libmachine: (addons-476078) DBG | Getting to WaitForSSH function...
	I0505 20:58:44.888711   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.889178   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:44.889207   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:44.889424   19551 main.go:141] libmachine: (addons-476078) DBG | Using SSH client type: external
	I0505 20:58:44.889454   19551 main.go:141] libmachine: (addons-476078) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa (-rw-------)
	I0505 20:58:44.889500   19551 main.go:141] libmachine: (addons-476078) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 20:58:44.889516   19551 main.go:141] libmachine: (addons-476078) DBG | About to run SSH command:
	I0505 20:58:44.889533   19551 main.go:141] libmachine: (addons-476078) DBG | exit 0
	I0505 20:58:45.024268   19551 main.go:141] libmachine: (addons-476078) DBG | SSH cmd err, output: <nil>: 
	I0505 20:58:45.024562   19551 main.go:141] libmachine: (addons-476078) KVM machine creation complete!
	I0505 20:58:45.024896   19551 main.go:141] libmachine: (addons-476078) Calling .GetConfigRaw
	I0505 20:58:45.025418   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:45.025600   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:45.025796   19551 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 20:58:45.025811   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:58:45.027072   19551 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 20:58:45.027091   19551 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 20:58:45.027099   19551 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 20:58:45.027107   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.029206   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.029534   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.029566   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.029695   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.029871   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.030021   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.030161   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.030322   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.030484   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.030494   19551 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 20:58:45.143208   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
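
The "Using SSH client type: native ... About to run SSH command: exit 0" block above simply runs a trivial command over SSH to confirm the guest is reachable with the generated key. A rough equivalent using golang.org/x/crypto/ssh; the user, address and key path mirror the log but are placeholders here, and host-key checking is deliberately skipped as it is for these throwaway test VMs.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; the test run uses the per-machine id_rsa shown above.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/demo/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.102:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// The same trivial probe the provisioner runs: succeed if SSH works at all.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}
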
	I0505 20:58:45.143231   19551 main.go:141] libmachine: Detecting the provisioner...
	I0505 20:58:45.143241   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.146058   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.146469   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.146505   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.146631   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.146839   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.147022   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.147171   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.147319   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.147469   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.147495   19551 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 20:58:45.260971   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 20:58:45.261048   19551 main.go:141] libmachine: found compatible host: buildroot
	I0505 20:58:45.261062   19551 main.go:141] libmachine: Provisioning with buildroot...
	I0505 20:58:45.261074   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:45.261401   19551 buildroot.go:166] provisioning hostname "addons-476078"
	I0505 20:58:45.261429   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:45.261587   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.264079   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.264450   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.264477   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.264629   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.264792   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.264961   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.265120   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.265285   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.265441   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.265454   19551 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-476078 && echo "addons-476078" | sudo tee /etc/hostname
	I0505 20:58:45.395472   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-476078
	
	I0505 20:58:45.395515   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.398148   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.398457   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.398479   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.398663   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.398881   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.399046   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.399187   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.399347   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.399562   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.399584   19551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-476078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-476078/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-476078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 20:58:45.522547   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 20:58:45.522584   19551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 20:58:45.522611   19551 buildroot.go:174] setting up certificates
	I0505 20:58:45.522629   19551 provision.go:84] configureAuth start
	I0505 20:58:45.522647   19551 main.go:141] libmachine: (addons-476078) Calling .GetMachineName
	I0505 20:58:45.522949   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:45.525565   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.525878   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.525906   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.526046   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.528307   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.528663   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.528692   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.528810   19551 provision.go:143] copyHostCerts
	I0505 20:58:45.528900   19551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 20:58:45.529061   19551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 20:58:45.529151   19551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 20:58:45.529234   19551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.addons-476078 san=[127.0.0.1 192.168.39.102 addons-476078 localhost minikube]
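
The provisioning step above issues a server certificate signed by the local minikube CA with both IP and DNS SANs. A compressed sketch of that kind of issuance with crypto/x509; key sizes, validity periods and subject fields are assumptions for illustration, not minikube's exact parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for the minikubeCA above.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with the same style of SAN list as the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-476078"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
		DNSNames:     []string{"addons-476078", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert: %d bytes of DER\n", len(srvDER))
}
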
	I0505 20:58:45.659193   19551 provision.go:177] copyRemoteCerts
	I0505 20:58:45.659265   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 20:58:45.659292   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.661779   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.662078   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.662104   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.662332   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.662518   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.662674   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.662764   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:45.750236   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 20:58:45.777175   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 20:58:45.803498   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 20:58:45.829258   19551 provision.go:87] duration metric: took 306.614751ms to configureAuth
	I0505 20:58:45.829282   19551 buildroot.go:189] setting minikube options for container-runtime
	I0505 20:58:45.829482   19551 config.go:182] Loaded profile config "addons-476078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 20:58:45.829565   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:45.832064   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.832455   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:45.832524   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:45.832710   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:45.832907   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.833067   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:45.833212   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:45.833366   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:45.833522   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:45.833536   19551 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 20:58:46.114998   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 20:58:46.115028   19551 main.go:141] libmachine: Checking connection to Docker...
	I0505 20:58:46.115035   19551 main.go:141] libmachine: (addons-476078) Calling .GetURL
	I0505 20:58:46.116350   19551 main.go:141] libmachine: (addons-476078) DBG | Using libvirt version 6000000
	I0505 20:58:46.118448   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.118735   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.118767   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.118889   19551 main.go:141] libmachine: Docker is up and running!
	I0505 20:58:46.118909   19551 main.go:141] libmachine: Reticulating splines...
	I0505 20:58:46.118918   19551 client.go:171] duration metric: took 26.278413629s to LocalClient.Create
	I0505 20:58:46.118942   19551 start.go:167] duration metric: took 26.278480373s to libmachine.API.Create "addons-476078"
	I0505 20:58:46.118959   19551 start.go:293] postStartSetup for "addons-476078" (driver="kvm2")
	I0505 20:58:46.118978   19551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 20:58:46.118998   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.119244   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 20:58:46.119265   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.121121   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.121390   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.121430   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.121544   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.121724   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.121903   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.122026   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:46.211729   19551 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 20:58:46.216656   19551 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 20:58:46.216677   19551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 20:58:46.216743   19551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 20:58:46.216766   19551 start.go:296] duration metric: took 97.798979ms for postStartSetup
	I0505 20:58:46.216797   19551 main.go:141] libmachine: (addons-476078) Calling .GetConfigRaw
	I0505 20:58:46.217321   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:46.219994   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.220327   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.220357   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.220525   19551 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/config.json ...
	I0505 20:58:46.220717   19551 start.go:128] duration metric: took 26.397680863s to createHost
	I0505 20:58:46.220741   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.222813   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.223117   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.223151   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.223267   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.223445   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.223575   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.223713   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.223901   19551 main.go:141] libmachine: Using SSH client type: native
	I0505 20:58:46.224061   19551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0505 20:58:46.224072   19551 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 20:58:46.336892   19551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714942726.298577736
	
	I0505 20:58:46.336915   19551 fix.go:216] guest clock: 1714942726.298577736
	I0505 20:58:46.336924   19551 fix.go:229] Guest: 2024-05-05 20:58:46.298577736 +0000 UTC Remote: 2024-05-05 20:58:46.220732058 +0000 UTC m=+26.508674640 (delta=77.845678ms)
	I0505 20:58:46.336947   19551 fix.go:200] guest clock delta is within tolerance: 77.845678ms
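
The "guest clock" lines above read the VM's clock (epoch seconds plus nanoseconds), compare it to the host's, and accept the machine if the drift is small; here the delta is 77.845678ms. A tiny sketch of that comparison; the 2s tolerance is an assumption for illustration.

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute drift between two clocks and whether
// it falls inside the allowed tolerance.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1714942726, 298577736) // epoch seconds + nanoseconds read from the VM
	host := time.Date(2024, 5, 5, 20, 58, 46, 220732058, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints 77.845678ms true
}
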
	I0505 20:58:46.336954   19551 start.go:83] releasing machines lock for "addons-476078", held for 26.514017864s
	I0505 20:58:46.336980   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.337313   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:46.340294   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.340652   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.340676   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.340835   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.341330   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.341534   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:58:46.341618   19551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 20:58:46.341675   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.341725   19551 ssh_runner.go:195] Run: cat /version.json
	I0505 20:58:46.341758   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:58:46.344293   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.344551   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.344583   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.344702   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.344741   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.344898   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.345073   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.345095   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:46.345116   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:46.345235   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:46.345323   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:58:46.345458   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:58:46.345583   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:58:46.345729   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:58:46.453060   19551 ssh_runner.go:195] Run: systemctl --version
	I0505 20:58:46.460876   19551 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 20:58:46.637888   19551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 20:58:46.645668   19551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 20:58:46.645732   19551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 20:58:46.662465   19551 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 20:58:46.662485   19551 start.go:494] detecting cgroup driver to use...
	I0505 20:58:46.662542   19551 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 20:58:46.679392   19551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 20:58:46.694801   19551 docker.go:217] disabling cri-docker service (if available) ...
	I0505 20:58:46.694860   19551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 20:58:46.710029   19551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 20:58:46.725451   19551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 20:58:46.846970   19551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 20:58:47.006823   19551 docker.go:233] disabling docker service ...
	I0505 20:58:47.006900   19551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 20:58:47.023437   19551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 20:58:47.037886   19551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 20:58:47.181829   19551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 20:58:47.316772   19551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 20:58:47.332048   19551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 20:58:47.352783   19551 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 20:58:47.352874   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.364822   19551 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 20:58:47.364877   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.376714   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.388959   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.400801   19551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 20:58:47.413168   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.424825   19551 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 20:58:47.443315   19551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
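
The series of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf so that the pause image, cgroup manager and sysctl defaults match what minikube expects. The small Go helper below expresses the same "replace the assignment if present, otherwise append it" idea on an in-memory string; file and SSH handling are omitted, and this is a sketch rather than the actual minikube code.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setOption forces a `key = "value"` assignment in a config fragment,
// replacing an existing line for that key or appending one.
func setOption(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return strings.TrimRight(conf, "\n") + "\n" + line + "\n"
}

func main() {
	conf := "[crio.image]\npause_image = \"old\"\n\n[crio.runtime]\n"
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
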
	I0505 20:58:47.455948   19551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 20:58:47.467082   19551 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 20:58:47.467143   19551 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 20:58:47.481949   19551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 20:58:47.492713   19551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 20:58:47.623992   19551 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 20:58:47.766117   19551 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 20:58:47.766210   19551 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 20:58:47.772258   19551 start.go:562] Will wait 60s for crictl version
	I0505 20:58:47.772327   19551 ssh_runner.go:195] Run: which crictl
	I0505 20:58:47.776548   19551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 20:58:47.817373   19551 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
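	The version block above is what crictl reports against the CRI-O socket configured in /etc/crictl.yaml a few steps earlier. Run by hand it looks like this (sketch; socket path taken from the log):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | head -n 20   # runtime status and config summary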
	I0505 20:58:47.817498   19551 ssh_runner.go:195] Run: crio --version
	I0505 20:58:47.847485   19551 ssh_runner.go:195] Run: crio --version
	I0505 20:58:47.881780   19551 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 20:58:47.883282   19551 main.go:141] libmachine: (addons-476078) Calling .GetIP
	I0505 20:58:47.886092   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:47.886404   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:58:47.886436   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:58:47.886659   19551 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 20:58:47.891519   19551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 20:58:47.906291   19551 kubeadm.go:877] updating cluster {Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 20:58:47.906407   19551 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 20:58:47.906447   19551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 20:58:47.941686   19551 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0505 20:58:47.941752   19551 ssh_runner.go:195] Run: which lz4
	I0505 20:58:47.946270   19551 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 20:58:47.950885   19551 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 20:58:47.950917   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0505 20:58:49.512139   19551 crio.go:462] duration metric: took 1.565902638s to copy over tarball
	I0505 20:58:49.512224   19551 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 20:58:52.068891   19551 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.556635119s)
	I0505 20:58:52.068924   19551 crio.go:469] duration metric: took 2.556753797s to extract the tarball
	I0505 20:58:52.068937   19551 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 20:58:52.108491   19551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 20:58:52.155406   19551 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 20:58:52.155437   19551 cache_images.go:84] Images are preloaded, skipping loading
	I0505 20:58:52.155447   19551 kubeadm.go:928] updating node { 192.168.39.102 8443 v1.30.0 crio true true} ...
	I0505 20:58:52.155579   19551 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-476078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
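	The [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that minikube renders and then copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To see the unit exactly as systemd will execute it, including this drop-in, standard systemd commands suffice (sketch):

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart --no-pager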
	I0505 20:58:52.155642   19551 ssh_runner.go:195] Run: crio config
	I0505 20:58:52.202989   19551 cni.go:84] Creating CNI manager for ""
	I0505 20:58:52.203008   19551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:58:52.203019   19551 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 20:58:52.203038   19551 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-476078 NodeName:addons-476078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 20:58:52.203165   19551 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-476078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 20:58:52.203223   19551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 20:58:52.213721   19551 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 20:58:52.213795   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 20:58:52.223423   19551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0505 20:58:52.241377   19551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 20:58:52.259367   19551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
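	At this point the kubeadm config printed above has been rendered to /var/tmp/minikube/kubeadm.yaml.new (2157 bytes). A quick way to eyeball it on the node before kubeadm consumes it (sketch; the binaries path is the one the log itself uses):

	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config print init-defaults | head -n 20   # upstream defaults, for comparison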
	I0505 20:58:52.277579   19551 ssh_runner.go:195] Run: grep 192.168.39.102	control-plane.minikube.internal$ /etc/hosts
	I0505 20:58:52.281843   19551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 20:58:52.294963   19551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 20:58:52.417648   19551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 20:58:52.434892   19551 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078 for IP: 192.168.39.102
	I0505 20:58:52.434912   19551 certs.go:194] generating shared ca certs ...
	I0505 20:58:52.434934   19551 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.435079   19551 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 20:58:52.555665   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt ...
	I0505 20:58:52.555693   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt: {Name:mke0edbd56f4a544e61431caa27ba4d5ab06e9ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.555845   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key ...
	I0505 20:58:52.555856   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key: {Name:mkfcd1b8ff14190bc149d6ff4e622064f68787ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.555920   19551 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 20:58:52.655889   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt ...
	I0505 20:58:52.655917   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt: {Name:mk1f26915abb39dda57f3a5f42e923d93c16b588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.656059   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key ...
	I0505 20:58:52.656072   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key: {Name:mkedd440eedb133e50e3b3b00ea464a51e3ea7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.656131   19551 certs.go:256] generating profile certs ...
	I0505 20:58:52.656201   19551 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.key
	I0505 20:58:52.656223   19551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt with IP's: []
	I0505 20:58:52.734141   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt ...
	I0505 20:58:52.734172   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: {Name:mk906155bf9b2932840b4dde633971c6458e573f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.734338   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.key ...
	I0505 20:58:52.734352   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.key: {Name:mk0b92a84e45934a4771366a8efb554eb3f13ebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.734449   19551 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6
	I0505 20:58:52.734472   19551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102]
	I0505 20:58:52.787920   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6 ...
	I0505 20:58:52.787950   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6: {Name:mkecd1630b33ef4018da87ed58b0d4ce2dfdc2bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.788111   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6 ...
	I0505 20:58:52.788127   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6: {Name:mk6850b3807c47a8030388d9e2df00e859760544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.788219   19551 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt.2bbb6ab6 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt
	I0505 20:58:52.788308   19551 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key.2bbb6ab6 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key
	I0505 20:58:52.788377   19551 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key
	I0505 20:58:52.788403   19551 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt with IP's: []
	I0505 20:58:52.917147   19551 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt ...
	I0505 20:58:52.917175   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt: {Name:mk5227d7b6aadc569f4e72cd5f4cc833e89dc2ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.917349   19551 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key ...
	I0505 20:58:52.917363   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key: {Name:mk2cc3cfd4eb822fb567db7c94bb8e67039e2892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:58:52.917566   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 20:58:52.917619   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 20:58:52.917655   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 20:58:52.917689   19551 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 20:58:52.918259   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 20:58:52.951260   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 20:58:52.981474   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 20:58:53.013443   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 20:58:53.043503   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0505 20:58:53.069988   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 20:58:53.098788   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 20:58:53.127765   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 20:58:53.172749   19551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 20:58:53.200287   19551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 20:58:53.218429   19551 ssh_runner.go:195] Run: openssl version
	I0505 20:58:53.224924   19551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 20:58:53.236444   19551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 20:58:53.241512   19551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 20:58:53.241567   19551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 20:58:53.247757   19551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
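	The two steps above link minikubeCA.pem into /etc/ssl/certs under its subject-hash name (b5213941.0), which is how OpenSSL-based clients on the node locate it. Checking the link and the hash by hand looks like this (sketch; standard openssl and ls invocations):

	openssl x509 -noout -subject -enddate -in /usr/share/ca-certificates/minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0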
	I0505 20:58:53.259783   19551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 20:58:53.264416   19551 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 20:58:53.264469   19551 kubeadm.go:391] StartCluster: {Name:addons-476078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:addons-476078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 20:58:53.264564   19551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 20:58:53.264625   19551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 20:58:53.304288   19551 cri.go:89] found id: ""
	I0505 20:58:53.304356   19551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0505 20:58:53.316757   19551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 20:58:53.327832   19551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 20:58:53.338498   19551 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 20:58:53.338518   19551 kubeadm.go:156] found existing configuration files:
	
	I0505 20:58:53.338594   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 20:58:53.348729   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 20:58:53.348789   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 20:58:53.359306   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 20:58:53.369269   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 20:58:53.369324   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 20:58:53.379770   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 20:58:53.389591   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 20:58:53.389637   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 20:58:53.400134   19551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 20:58:53.409862   19551 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 20:58:53.409892   19551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 20:58:53.420026   19551 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 20:58:53.478620   19551 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0505 20:58:53.478726   19551 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 20:58:53.617536   19551 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 20:58:53.617677   19551 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 20:58:53.617804   19551 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 20:58:53.841994   19551 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 20:58:54.044603   19551 out.go:204]   - Generating certificates and keys ...
	I0505 20:58:54.044763   19551 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 20:58:54.044851   19551 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 20:58:54.044963   19551 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0505 20:58:54.045056   19551 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0505 20:58:54.178170   19551 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0505 20:58:54.222250   19551 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0505 20:58:54.357687   19551 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0505 20:58:54.357851   19551 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-476078 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0505 20:58:54.510379   19551 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0505 20:58:54.510544   19551 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-476078 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I0505 20:58:54.678675   19551 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0505 20:58:55.017961   19551 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0505 20:58:55.164159   19551 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0505 20:58:55.164280   19551 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 20:58:55.226065   19551 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 20:58:55.438189   19551 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0505 20:58:55.499677   19551 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 20:58:55.708458   19551 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 20:58:55.842164   19551 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 20:58:55.842381   19551 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 20:58:55.845799   19551 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 20:58:55.847621   19551 out.go:204]   - Booting up control plane ...
	I0505 20:58:55.847714   19551 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 20:58:55.847797   19551 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 20:58:55.847889   19551 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 20:58:55.864623   19551 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 20:58:55.865493   19551 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 20:58:55.865563   19551 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 20:58:56.017849   19551 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0505 20:58:56.017954   19551 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0505 20:58:57.018316   19551 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001049337s
	I0505 20:58:57.018430   19551 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0505 20:59:02.018068   19551 kubeadm.go:309] [api-check] The API server is healthy after 5.001373355s
	I0505 20:59:02.033433   19551 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 20:59:02.553298   19551 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 20:59:02.585907   19551 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 20:59:02.586264   19551 kubeadm.go:309] [mark-control-plane] Marking the node addons-476078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 20:59:02.600051   19551 kubeadm.go:309] [bootstrap-token] Using token: m2k46n.atcee0it0y39276n
	I0505 20:59:02.601455   19551 out.go:204]   - Configuring RBAC rules ...
	I0505 20:59:02.601568   19551 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 20:59:02.609367   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 20:59:02.620096   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 20:59:02.624949   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 20:59:02.627835   19551 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 20:59:02.632967   19551 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 20:59:02.745274   19551 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 20:59:03.190426   19551 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 20:59:03.744916   19551 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 20:59:03.747008   19551 kubeadm.go:309] 
	I0505 20:59:03.747080   19551 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 20:59:03.747101   19551 kubeadm.go:309] 
	I0505 20:59:03.747177   19551 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 20:59:03.747189   19551 kubeadm.go:309] 
	I0505 20:59:03.747222   19551 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 20:59:03.747268   19551 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 20:59:03.747316   19551 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 20:59:03.747323   19551 kubeadm.go:309] 
	I0505 20:59:03.747363   19551 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 20:59:03.747369   19551 kubeadm.go:309] 
	I0505 20:59:03.747404   19551 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 20:59:03.747410   19551 kubeadm.go:309] 
	I0505 20:59:03.747449   19551 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 20:59:03.747543   19551 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 20:59:03.747659   19551 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 20:59:03.747681   19551 kubeadm.go:309] 
	I0505 20:59:03.747783   19551 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 20:59:03.747877   19551 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 20:59:03.747889   19551 kubeadm.go:309] 
	I0505 20:59:03.748002   19551 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token m2k46n.atcee0it0y39276n \
	I0505 20:59:03.748161   19551 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 \
	I0505 20:59:03.748199   19551 kubeadm.go:309] 	--control-plane 
	I0505 20:59:03.748216   19551 kubeadm.go:309] 
	I0505 20:59:03.748325   19551 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 20:59:03.748332   19551 kubeadm.go:309] 
	I0505 20:59:03.748408   19551 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token m2k46n.atcee0it0y39276n \
	I0505 20:59:03.748540   19551 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 
	I0505 20:59:03.748689   19551 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
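	kubeadm init has finished here; the kubelet-service warning is expected, since minikube starts the kubelet via systemctl itself (see the earlier "systemctl start kubelet" step). Before the RBAC and addon steps that follow, cluster health can be spot-checked with the same kubeconfig the log uses (sketch):

	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system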
	I0505 20:59:03.748703   19551 cni.go:84] Creating CNI manager for ""
	I0505 20:59:03.748713   19551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:59:03.750679   19551 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 20:59:03.752142   19551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 20:59:03.767520   19551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0505 20:59:03.788783   19551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 20:59:03.788852   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:03.788852   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-476078 minikube.k8s.io/updated_at=2024_05_05T20_59_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=addons-476078 minikube.k8s.io/primary=true
	I0505 20:59:03.846495   19551 ops.go:34] apiserver oom_adj: -16
	I0505 20:59:03.959981   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:04.460385   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:04.960940   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:05.460366   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:05.960786   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:06.460471   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:06.960026   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:07.460246   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:07.959980   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:08.460209   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:08.960611   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:09.460194   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:09.960695   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:10.460651   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:10.960288   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:11.460430   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:11.960700   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:12.460477   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:12.960683   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:13.460332   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:13.961042   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:14.460168   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:14.960455   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:15.460071   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:15.960299   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:16.460865   19551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 20:59:16.552841   19551 kubeadm.go:1107] duration metric: took 12.764049588s to wait for elevateKubeSystemPrivileges
	W0505 20:59:16.552895   19551 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 20:59:16.552908   19551 kubeadm.go:393] duration metric: took 23.288442045s to StartCluster
	I0505 20:59:16.552938   19551 settings.go:142] acquiring lock: {Name:mkbe19b7965e4b0b9928cd2b7b56f51dec95b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:59:16.553096   19551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 20:59:16.553641   19551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:59:16.553865   19551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0505 20:59:16.553891   19551 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 20:59:16.555818   19551 out.go:177] * Verifying Kubernetes components...
	I0505 20:59:16.553969   19551 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0505 20:59:16.554089   19551 config.go:182] Loaded profile config "addons-476078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 20:59:16.557304   19551 addons.go:69] Setting yakd=true in profile "addons-476078"
	I0505 20:59:16.557309   19551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 20:59:16.557321   19551 addons.go:69] Setting default-storageclass=true in profile "addons-476078"
	I0505 20:59:16.557325   19551 addons.go:69] Setting cloud-spanner=true in profile "addons-476078"
	I0505 20:59:16.557303   19551 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-476078"
	I0505 20:59:16.557347   19551 addons.go:69] Setting ingress=true in profile "addons-476078"
	I0505 20:59:16.557361   19551 addons.go:69] Setting ingress-dns=true in profile "addons-476078"
	I0505 20:59:16.557372   19551 addons.go:234] Setting addon cloud-spanner=true in "addons-476078"
	I0505 20:59:16.557380   19551 addons.go:234] Setting addon ingress-dns=true in "addons-476078"
	I0505 20:59:16.557389   19551 addons.go:69] Setting helm-tiller=true in profile "addons-476078"
	I0505 20:59:16.557390   19551 addons.go:69] Setting inspektor-gadget=true in profile "addons-476078"
	I0505 20:59:16.557379   19551 addons.go:69] Setting gcp-auth=true in profile "addons-476078"
	I0505 20:59:16.557396   19551 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-476078"
	I0505 20:59:16.557406   19551 addons.go:234] Setting addon helm-tiller=true in "addons-476078"
	I0505 20:59:16.557416   19551 addons.go:234] Setting addon inspektor-gadget=true in "addons-476078"
	I0505 20:59:16.557429   19551 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-476078"
	I0505 20:59:16.557431   19551 mustload.go:65] Loading cluster: addons-476078
	I0505 20:59:16.557334   19551 addons.go:234] Setting addon yakd=true in "addons-476078"
	I0505 20:59:16.557434   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557442   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557428   19551 addons.go:69] Setting storage-provisioner=true in profile "addons-476078"
	I0505 20:59:16.557431   19551 addons.go:69] Setting volcano=true in profile "addons-476078"
	I0505 20:59:16.557461   19551 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-476078"
	I0505 20:59:16.557479   19551 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-476078"
	I0505 20:59:16.557480   19551 addons.go:234] Setting addon storage-provisioner=true in "addons-476078"
	I0505 20:59:16.557490   19551 addons.go:234] Setting addon volcano=true in "addons-476078"
	I0505 20:59:16.557495   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557355   19551 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-476078"
	I0505 20:59:16.557523   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557556   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557430   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557709   19551 config.go:182] Loaded profile config "addons-476078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 20:59:16.557912   19551 addons.go:69] Setting registry=true in profile "addons-476078"
	I0505 20:59:16.557936   19551 addons.go:234] Setting addon registry=true in "addons-476078"
	I0505 20:59:16.557941   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557947   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557956   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557958   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557965   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557967   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557975   19551 addons.go:69] Setting volumesnapshots=true in profile "addons-476078"
	I0505 20:59:16.558007   19551 addons.go:234] Setting addon volumesnapshots=true in "addons-476078"
	I0505 20:59:16.557977   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557941   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557378   19551 addons.go:234] Setting addon ingress=true in "addons-476078"
	I0505 20:59:16.558037   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557451   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558085   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558219   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558231   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558239   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558253   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558285   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.557935   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558350   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558352   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558379   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558411   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.557961   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558421   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558425   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558435   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.557315   19551 addons.go:69] Setting metrics-server=true in profile "addons-476078"
	I0505 20:59:16.558517   19551 addons.go:234] Setting addon metrics-server=true in "addons-476078"
	I0505 20:59:16.557391   19551 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-476078"
	I0505 20:59:16.558654   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558664   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558665   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.558719   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558722   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558748   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.558862   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.558873   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.579601   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0505 20:59:16.579601   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40835
	I0505 20:59:16.579923   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0505 20:59:16.580053   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.580187   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.580306   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.580558   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.580587   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.580756   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.580777   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.580909   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.580923   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.581257   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.581312   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.581354   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.581594   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.581642   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.582066   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.582090   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.586371   19551 addons.go:234] Setting addon default-storageclass=true in "addons-476078"
	I0505 20:59:16.586403   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.586439   19551 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-476078"
	I0505 20:59:16.586471   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.586667   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.586697   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.586825   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.586870   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.590389   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0505 20:59:16.592024   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.592053   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.592706   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.592742   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.593277   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.593331   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.599529   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
	I0505 20:59:16.599536   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43719
	I0505 20:59:16.599545   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.600143   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.600162   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.600461   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.600528   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.601112   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.601146   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.601382   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.601466   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.601477   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.601764   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.601922   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.601936   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.602378   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.602412   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.602597   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.603177   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.603207   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.614145   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0505 20:59:16.615073   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.615753   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.615772   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.615923   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0505 20:59:16.616131   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.616674   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.616712   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.617293   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.617960   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.617977   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.620560   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.621294   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.621335   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.621897   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0505 20:59:16.622371   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.622856   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.622872   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.623207   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.623767   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.623790   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.625716   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0505 20:59:16.626581   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.627122   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.627138   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.627461   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.628017   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.628051   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.628281   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39611
	I0505 20:59:16.629186   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.629746   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.629762   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.630093   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.630641   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.630673   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.633558   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0505 20:59:16.634046   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.634555   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.634572   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.634937   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.635122   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.636989   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.639388   19551 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0505 20:59:16.639211   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40145
	I0505 20:59:16.640832   19551 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0505 20:59:16.640847   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0505 20:59:16.640864   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.641275   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.641688   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.641705   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.642025   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.642605   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.642643   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.644376   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.644927   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.644959   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.645126   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.645298   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.645436   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.645559   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
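
Each "sshutil.go:53] new ssh client" line is assembled from values the driver just returned over RPC (.GetSSHHostname, .GetSSHPort, .GetSSHKeyPath, .GetSSHUsername). The following is an illustrative key-based dial using golang.org/x/crypto/ssh, not minikube's own sshutil package.

	// Illustrative only: dial an SSH session from the host/port/key/user
	// tuple shown in "new ssh client: &{IP:... Port:22 SSHKeyPath:... Username:docker}".
	package guestssh

	import (
		"fmt"
		"net"
		"os"
		"strconv"

		"golang.org/x/crypto/ssh"
	)

	func dialGuest(ip string, port int, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, fmt.Errorf("read key %s: %w", keyPath, err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, fmt.Errorf("parse key: %w", err)
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		}
		return ssh.Dial("tcp", net.JoinHostPort(ip, strconv.Itoa(port)), cfg)
	}
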
	I0505 20:59:16.646015   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I0505 20:59:16.646449   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.646937   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.646959   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.647868   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.648440   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.648474   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.649940   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35571
	I0505 20:59:16.650052   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0505 20:59:16.650441   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.650499   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.650949   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.650965   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.651101   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.651110   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.651503   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.652149   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.652199   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.652655   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.652827   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.654361   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.656392   19551 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0505 20:59:16.657811   19551 out.go:177]   - Using image docker.io/busybox:stable
	I0505 20:59:16.656271   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0505 20:59:16.656823   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0505 20:59:16.659680   19551 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0505 20:59:16.659701   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0505 20:59:16.659717   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.658234   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.658461   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.658942   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0505 20:59:16.659433   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33407
	I0505 20:59:16.660291   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.660315   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.660700   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.661523   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.661539   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.661596   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.661732   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.661741   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.662144   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.662314   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663005   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.663025   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.663034   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.663059   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.663261   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.663442   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663533   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.663575   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663641   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.663658   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.663685   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.663734   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.663886   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.664000   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.664116   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
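
The repeated "ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/<file> (N bytes)" lines mean the manifest bytes are embedded in the minikube binary and streamed straight onto the guest over the SSH connection, with no local temp file. A hedged sketch of that idea follows; piping into sudo tee is an illustrative assumption, not necessarily how ssh_runner implements it.

	// Sketch: stream an in-memory manifest to a path on the guest over an
	// existing SSH connection ("scp memory --> <path>"). The sudo tee
	// command is an illustrative choice, not minikube's implementation.
	package guestssh

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func writeRemoteFile(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	}
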
	I0505 20:59:16.665014   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.667064   19551 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0505 20:59:16.665932   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.666598   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.666757   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0505 20:59:16.667219   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.668338   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0505 20:59:16.668349   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0505 20:59:16.668365   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.669529   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.670870   19551 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0505 20:59:16.669626   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0505 20:59:16.670093   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.670624   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35365
	I0505 20:59:16.671217   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0505 20:59:16.672706   19551 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0505 20:59:16.672841   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.673466   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.673943   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.673662   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.673891   19551 out.go:177]   - Using image docker.io/registry:2.8.3
	I0505 20:59:16.674117   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0505 20:59:16.674150   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.673918   19551 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0505 20:59:16.674609   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.675461   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.675473   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.675529   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.676900   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0505 20:59:16.676915   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0505 20:59:16.676933   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.675548   19551 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0505 20:59:16.674749   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.674765   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0505 20:59:16.674875   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.674987   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.675151   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.674638   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0505 20:59:16.675859   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.677240   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0505 20:59:16.678980   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.678995   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.679071   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.679143   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.679269   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.679279   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.679408   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.679577   19551 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0505 20:59:16.679591   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0505 20:59:16.679605   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.679845   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.679876   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.680445   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.680461   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.680780   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.680826   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.680986   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.681087   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.681175   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.681190   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.681701   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45199
	I0505 20:59:16.681810   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.681831   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.681846   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.681916   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.681957   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.682166   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.682194   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.682299   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.682667   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.682683   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I0505 20:59:16.682696   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.683061   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.683112   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.683130   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.683145   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.683369   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.683515   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.683535   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.683613   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.683656   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.683781   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.683797   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.683804   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.684218   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.684256   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.684299   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.684358   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.684615   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.684666   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.686421   19551 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0505 20:59:16.687814   19551 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0505 20:59:16.686455   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.685209   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.685326   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.685555   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.686251   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.685094   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.686891   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:16.688096   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.688137   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.688320   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0505 20:59:16.688342   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.688414   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.688447   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.688470   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.687061   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.687353   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.687648   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.688653   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.690091   19551 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 20:59:16.688903   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:16.689517   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.689589   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.690629   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.691236   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:16.691282   19551 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 20:59:16.691916   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.692292   19551 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0505 20:59:16.692505   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.693477   19551 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0505 20:59:16.692624   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.694830   19551 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0505 20:59:16.694845   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0505 20:59:16.694860   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.693582   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.696143   19551 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0505 20:59:16.696164   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0505 20:59:16.696179   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.693596   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 20:59:16.696235   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.693722   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.693755   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:16.696542   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:16.693777   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.694903   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.694999   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.696962   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.697029   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:16.697052   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:16.697060   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:16.697075   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:16.697230   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.697477   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:16.697502   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:16.697514   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	W0505 20:59:16.697636   19551 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0505 20:59:16.699207   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.700473   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.700883   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.700910   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.700944   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.700969   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.701087   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.701251   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.701437   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.701439   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.701586   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.701768   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.701961   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.702143   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.703086   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.703547   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.703572   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.703723   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.703861   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.703957   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.704048   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.712033   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0505 20:59:16.712607   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.713184   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.713204   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.713252   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0505 20:59:16.713597   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.713838   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.713868   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0505 20:59:16.713939   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.714368   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.714390   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.714462   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.714708   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.714903   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.714967   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.714989   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.715297   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.715466   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.716223   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.716637   19551 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 20:59:16.716652   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 20:59:16.716670   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.717235   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I0505 20:59:16.717285   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.719372   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0505 20:59:16.717691   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.720089   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.720747   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.720776   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.721994   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0505 20:59:16.720654   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0505 20:59:16.720688   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.721334   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.723043   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.724150   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0505 20:59:16.725504   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0505 20:59:16.723220   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.723368   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.723432   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:16.727858   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0505 20:59:16.729127   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0505 20:59:16.726948   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:16.726971   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.727294   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:16.730151   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:16.731377   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0505 20:59:16.730326   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.730465   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:16.731742   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.733647   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0505 20:59:16.734677   19551 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0505 20:59:16.732754   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	W0505 20:59:16.733229   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34158->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.736021   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0505 20:59:16.737226   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0505 20:59:16.737231   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0505 20:59:16.737249   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.736045   19551 retry.go:31] will retry after 287.519499ms: ssh: handshake failed: read tcp 192.168.39.1:34158->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.737279   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0505 20:59:16.737440   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.737440   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:16.739340   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0505 20:59:16.740099   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.740590   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.741275   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.741281   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.741299   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.740809   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.741250   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 20:59:16.741337   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.742613   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 20:59:16.742644   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.741427   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.741464   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.743991   19551 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0505 20:59:16.744004   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0505 20:59:16.744017   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:16.744041   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.744115   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.744194   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.744255   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:16.746961   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.747321   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:16.747350   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:16.747475   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:16.747624   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:16.747749   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:16.747856   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	W0505 20:59:16.748513   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34166->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.748539   19551 retry.go:31] will retry after 353.400197ms: ssh: handshake failed: read tcp 192.168.39.1:34166->192.168.39.102:22: read: connection reset by peer
	W0505 20:59:16.748660   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34168->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.748676   19551 retry.go:31] will retry after 245.1848ms: ssh: handshake failed: read tcp 192.168.39.1:34168->192.168.39.102:22: read: connection reset by peer
	W0505 20:59:16.773951   19551 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34178->192.168.39.102:22: read: connection reset by peer
	I0505 20:59:16.773976   19551 retry.go:31] will retry after 240.283066ms: ssh: handshake failed: read tcp 192.168.39.1:34178->192.168.39.102:22: read: connection reset by peer
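
Several of the parallel addon goroutines hit "ssh: handshake failed: ... connection reset by peer" while the guest's sshd is still settling; retry.go simply redials after a few hundred milliseconds of jittered delay, which is why the installs below still succeed. A minimal sketch of that retry-on-dial pattern (the delays only approximate the 240-360ms waits logged here):

	// Sketch of the retry.go pattern above: redial after a short,
	// randomized delay when the SSH handshake is reset.
	package guestssh

	import (
		"math/rand"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			// the log shows waits of a few hundred milliseconds with jitter
			time.Sleep(time.Duration(200+rand.Intn(200)) * time.Millisecond)
		}
		return nil, lastErr
	}
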
	I0505 20:59:16.927718   19551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 20:59:16.927731   19551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0505 20:59:16.967596   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0505 20:59:17.011049   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0505 20:59:17.011083   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0505 20:59:17.030853   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0505 20:59:17.075422   19551 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0505 20:59:17.075450   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0505 20:59:17.111745   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0505 20:59:17.116694   19551 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0505 20:59:17.116718   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0505 20:59:17.154648   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0505 20:59:17.154672   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0505 20:59:17.158704   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 20:59:17.161399   19551 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0505 20:59:17.161419   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0505 20:59:17.180257   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0505 20:59:17.180281   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0505 20:59:17.222716   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0505 20:59:17.250353   19551 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0505 20:59:17.250383   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0505 20:59:17.339397   19551 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0505 20:59:17.339423   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0505 20:59:17.350090   19551 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 20:59:17.350110   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0505 20:59:17.384464   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0505 20:59:17.384484   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0505 20:59:17.397303   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 20:59:17.404249   19551 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0505 20:59:17.404273   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0505 20:59:17.444884   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0505 20:59:17.468462   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0505 20:59:17.468488   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0505 20:59:17.555354   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0505 20:59:17.556518   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0505 20:59:17.587828   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 20:59:17.588019   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0505 20:59:17.588042   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0505 20:59:17.668921   19551 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0505 20:59:17.668956   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0505 20:59:17.698341   19551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0505 20:59:17.698372   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0505 20:59:17.780921   19551 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0505 20:59:17.780946   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0505 20:59:17.787900   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0505 20:59:17.787917   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0505 20:59:17.876959   19551 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0505 20:59:17.876993   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0505 20:59:17.940401   19551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0505 20:59:17.940424   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0505 20:59:18.064516   19551 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0505 20:59:18.064540   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0505 20:59:18.128688   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0505 20:59:18.128720   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0505 20:59:18.140619   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0505 20:59:18.283601   19551 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0505 20:59:18.283633   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0505 20:59:18.520153   19551 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0505 20:59:18.520177   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0505 20:59:18.524013   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0505 20:59:18.524034   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0505 20:59:18.628736   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0505 20:59:18.628760   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0505 20:59:18.800885   19551 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0505 20:59:18.800908   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0505 20:59:18.972709   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0505 20:59:19.021984   19551 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.094176572s)
	I0505 20:59:19.022013   19551 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0505 20:59:19.022022   19551 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.094269417s)
	I0505 20:59:19.022747   19551 node_ready.go:35] waiting up to 6m0s for node "addons-476078" to be "Ready" ...
	I0505 20:59:19.054466   19551 node_ready.go:49] node "addons-476078" has status "Ready":"True"
	I0505 20:59:19.054489   19551 node_ready.go:38] duration metric: took 31.696523ms for node "addons-476078" to be "Ready" ...
	I0505 20:59:19.054498   19551 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 20:59:19.072847   19551 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:19.152202   19551 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 20:59:19.152230   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0505 20:59:19.356460   19551 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0505 20:59:19.356495   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0505 20:59:19.466227   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 20:59:19.528257   19551 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-476078" context rescaled to 1 replicas
	I0505 20:59:19.630907   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0505 20:59:19.630927   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0505 20:59:19.846601   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0505 20:59:19.846629   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0505 20:59:20.226165   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0505 20:59:20.226186   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0505 20:59:20.676164   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0505 20:59:20.676188   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0505 20:59:20.845068   19551 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0505 20:59:20.845091   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0505 20:59:21.080247   19551 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:21.284462   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0505 20:59:23.405935   19551 pod_ready.go:102] pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:23.471678   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.50403672s)
	I0505 20:59:23.471698   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.440810283s)
	I0505 20:59:23.471729   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.471744   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.471768   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.359989931s)
	I0505 20:59:23.471802   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.471817   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.471729   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.471880   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472049   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472069   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472073   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472105   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.472113   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472122   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472154   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472175   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472179   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472185   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.472192   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472258   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472325   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472346   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.472353   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472376   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472365   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.472324   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472453   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472454   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472464   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.472671   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.472721   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.472731   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694472   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.535737185s)
	I0505 20:59:23.694517   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694528   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694549   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.471798915s)
	I0505 20:59:23.694586   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694598   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694603   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.297277226s)
	I0505 20:59:23.694619   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694632   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694646   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.249736475s)
	I0505 20:59:23.694673   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694682   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694786   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.694828   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.694835   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694844   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694847   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.694858   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694866   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694873   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694898   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.694851   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.694911   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.694922   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.694930   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.695017   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695027   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695050   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695053   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695060   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695068   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.695075   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.695089   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695111   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695310   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695337   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695344   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695495   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.695528   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695535   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.695543   19551 addons.go:475] Verifying addon registry=true in "addons-476078"
	I0505 20:59:23.697982   19551 out.go:177] * Verifying registry addon...
	I0505 20:59:23.695704   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.695722   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.699578   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.700466   19551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0505 20:59:23.872551   19551 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0505 20:59:23.872582   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:23.892506   19551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0505 20:59:23.892537   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:23.895893   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:23.896311   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:23.896341   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:23.896533   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:23.896745   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:23.896914   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:23.897052   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:23.933692   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.933717   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.934018   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.934061   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.934072   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:23.957239   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:23.957258   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:23.957562   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:23.957599   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:23.957608   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	W0505 20:59:23.957714   19551 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0505 20:59:24.294391   19551 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.294417   19551 pod_ready.go:81] duration metric: took 5.22153717s for pod "coredns-7db6d8ff4d-gnhf4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.294427   19551 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gpclx" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.348859   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:24.356512   19551 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0505 20:59:24.420462   19551 pod_ready.go:92] pod "coredns-7db6d8ff4d-gpclx" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.420485   19551 pod_ready.go:81] duration metric: took 126.050935ms for pod "coredns-7db6d8ff4d-gpclx" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.420494   19551 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.514662   19551 addons.go:234] Setting addon gcp-auth=true in "addons-476078"
	I0505 20:59:24.514741   19551 host.go:66] Checking if "addons-476078" exists ...
	I0505 20:59:24.515090   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:24.515123   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:24.529975   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0505 20:59:24.530404   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:24.530927   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:24.530957   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:24.531314   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:24.534705   19551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 20:59:24.534761   19551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 20:59:24.549833   19551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0505 20:59:24.550318   19551 main.go:141] libmachine: () Calling .GetVersion
	I0505 20:59:24.550839   19551 main.go:141] libmachine: Using API Version  1
	I0505 20:59:24.550871   19551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 20:59:24.551206   19551 main.go:141] libmachine: () Calling .GetMachineName
	I0505 20:59:24.551392   19551 main.go:141] libmachine: (addons-476078) Calling .GetState
	I0505 20:59:24.552959   19551 main.go:141] libmachine: (addons-476078) Calling .DriverName
	I0505 20:59:24.553205   19551 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0505 20:59:24.553225   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHHostname
	I0505 20:59:24.555604   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:24.555996   19551 main.go:141] libmachine: (addons-476078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:a4:72", ip: ""} in network mk-addons-476078: {Iface:virbr1 ExpiryTime:2024-05-05 21:58:36 +0000 UTC Type:0 Mac:52:54:00:48:a4:72 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:addons-476078 Clientid:01:52:54:00:48:a4:72}
	I0505 20:59:24.556033   19551 main.go:141] libmachine: (addons-476078) DBG | domain addons-476078 has defined IP address 192.168.39.102 and MAC address 52:54:00:48:a4:72 in network mk-addons-476078
	I0505 20:59:24.556349   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHPort
	I0505 20:59:24.556504   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHKeyPath
	I0505 20:59:24.556637   19551 main.go:141] libmachine: (addons-476078) Calling .GetSSHUsername
	I0505 20:59:24.556793   19551 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/addons-476078/id_rsa Username:docker}
	I0505 20:59:24.569280   19551 pod_ready.go:92] pod "etcd-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.569304   19551 pod_ready.go:81] duration metric: took 148.803044ms for pod "etcd-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.569316   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.699445   19551 pod_ready.go:92] pod "kube-apiserver-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.699474   19551 pod_ready.go:81] duration metric: took 130.149403ms for pod "kube-apiserver-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.699500   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.783743   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:24.788587   19551 pod_ready.go:92] pod "kube-controller-manager-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.788619   19551 pod_ready.go:81] duration metric: took 89.108732ms for pod "kube-controller-manager-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.788633   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrfs4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.842698   19551 pod_ready.go:92] pod "kube-proxy-qrfs4" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.842729   19551 pod_ready.go:81] duration metric: took 54.083291ms for pod "kube-proxy-qrfs4" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.842742   19551 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.891986   19551 pod_ready.go:92] pod "kube-scheduler-addons-476078" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:24.892013   19551 pod_ready.go:81] duration metric: took 49.262475ms for pod "kube-scheduler-addons-476078" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:24.892026   19551 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:25.207064   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:25.378647   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.823252066s)
	I0505 20:59:25.378705   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378718   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.378712   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.822155801s)
	I0505 20:59:25.378752   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378771   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.378789   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.790927629s)
	I0505 20:59:25.378826   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378841   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.378909   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.406153479s)
	I0505 20:59:25.378944   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.378964   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.379116   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.238179313s)
	I0505 20:59:25.379148   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.379161   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381008   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381008   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381029   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381049   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381052   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381061   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381066   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381072   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381033   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381082   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381085   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381090   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381094   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381117   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381123   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381078   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381132   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381138   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381062   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381072   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381159   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:25.381164   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381139   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381168   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381146   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:25.381312   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381325   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.381334   19551 addons.go:475] Verifying addon metrics-server=true in "addons-476078"
	I0505 20:59:25.381401   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381421   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.383255   19551 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-476078 service yakd-dashboard -n yakd-dashboard
	
	I0505 20:59:25.381515   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381539   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381554   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381572   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.381597   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:25.381609   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:25.386110   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.386128   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.386130   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:25.386151   19551 addons.go:475] Verifying addon ingress=true in "addons-476078"
	I0505 20:59:25.387804   19551 out.go:177] * Verifying ingress addon...
	I0505 20:59:25.389918   19551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0505 20:59:25.403342   19551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0505 20:59:25.403363   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:25.705990   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:25.895707   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:26.221012   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:26.427386   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:26.584376   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.118099812s)
	W0505 20:59:26.584426   19551 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0505 20:59:26.584450   19551 retry.go:31] will retry after 276.362996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0505 20:59:26.714819   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:26.861259   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0505 20:59:26.894189   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:26.904475   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:27.207175   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:27.394802   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:27.793564   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:27.934271   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:27.961802   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.677278095s)
	I0505 20:59:27.961828   19551 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.408603507s)
	I0505 20:59:27.961847   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:27.961862   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:27.963622   19551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0505 20:59:27.962197   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:27.962232   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:27.963656   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:27.965113   19551 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0505 20:59:27.966299   19551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0505 20:59:27.966313   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0505 20:59:27.965125   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:27.966369   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:27.966642   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:27.966659   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:27.966670   19551 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-476078"
	I0505 20:59:27.966677   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:27.968121   19551 out.go:177] * Verifying csi-hostpath-driver addon...
	I0505 20:59:27.970165   19551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0505 20:59:28.001517   19551 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0505 20:59:28.001546   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:28.047135   19551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0505 20:59:28.047167   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0505 20:59:28.198485   19551 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0505 20:59:28.198513   19551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0505 20:59:28.208121   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:28.395526   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:28.409804   19551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0505 20:59:28.477161   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:28.705798   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:28.894987   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:28.975299   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:29.205998   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:29.403686   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:29.407097   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:29.489818   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:29.710034   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:29.894606   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:29.978368   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:30.209921   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:30.395368   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:30.414264   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.552958721s)
	I0505 20:59:30.414319   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.414335   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.414621   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.414640   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.414656   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.414664   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.414901   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.414920   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.475809   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:30.733541   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:30.861434   19551 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.451595064s)
	I0505 20:59:30.861494   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.861519   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.861796   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.861818   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.861829   19551 main.go:141] libmachine: Making call to close driver server
	I0505 20:59:30.861839   19551 main.go:141] libmachine: (addons-476078) Calling .Close
	I0505 20:59:30.861840   19551 main.go:141] libmachine: (addons-476078) DBG | Closing plugin on server side
	I0505 20:59:30.862085   19551 main.go:141] libmachine: Successfully made call to close driver server
	I0505 20:59:30.862107   19551 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 20:59:30.863990   19551 addons.go:475] Verifying addon gcp-auth=true in "addons-476078"
	I0505 20:59:30.865638   19551 out.go:177] * Verifying gcp-auth addon...
	I0505 20:59:30.868069   19551 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0505 20:59:30.876700   19551 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0505 20:59:30.876716   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:30.908554   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:30.977272   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:31.206654   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:31.371877   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:31.394872   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:31.477809   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:31.705173   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:31.872125   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:31.894961   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:31.904090   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:31.976975   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:32.206727   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:32.372126   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:32.396639   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:32.477510   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:32.708364   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:32.872682   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:32.895803   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:32.977431   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:33.206346   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:33.372580   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:33.394472   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:33.475342   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:33.706020   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:33.872242   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:33.894687   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:33.975582   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:34.205664   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:34.371930   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:34.395595   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:34.406035   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:34.476480   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:34.705443   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:34.872484   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:34.896421   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:34.976701   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:35.221526   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:35.372275   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:35.394838   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:35.557529   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:35.706651   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:35.873475   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:35.894894   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:35.985040   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:36.206592   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:36.371549   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:36.394724   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:36.477224   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:36.706065   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:36.873004   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:36.895047   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:36.898501   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:36.975674   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:37.205889   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:37.372338   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:37.397914   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:37.477574   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:37.706349   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:37.872835   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:37.894916   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:37.976060   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:38.205298   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:38.372967   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:38.395380   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:38.475512   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:38.707604   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:38.872338   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:38.895162   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:38.976349   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:39.206659   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:39.371459   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:39.396695   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:39.397840   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:39.483165   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:39.706856   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:39.872445   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:39.896779   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:39.977426   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:40.207114   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:40.371924   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:40.395734   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:40.476456   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:40.706257   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:40.873688   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:40.896935   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:41.150406   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:41.206300   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:41.372440   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:41.394924   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:41.402852   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:41.476943   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:41.708377   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:41.872908   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:41.895882   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:41.976595   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:42.205494   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:42.371598   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:42.394896   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:42.482487   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:42.706047   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:42.872865   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:42.895839   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:42.977330   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:43.205850   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:43.372418   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:43.394766   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:43.476687   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:43.705129   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:43.872844   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:43.895719   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:43.898556   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:43.981931   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:44.434388   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:44.435029   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:44.435231   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:44.477370   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:44.704645   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:44.871525   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:44.896530   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:44.978050   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:45.208875   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:45.371820   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:45.394764   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:45.476590   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:45.800325   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:45.873113   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:45.896897   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:45.902486   19551 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"False"
	I0505 20:59:45.976213   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:46.206719   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:46.371366   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:46.399295   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:46.477352   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:46.706305   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:47.164827   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:47.174479   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:47.175474   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:47.207205   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:47.372284   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:47.394634   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:47.476067   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:47.706548   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:47.872618   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:47.896086   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:47.978781   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:48.206851   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:48.372075   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:48.395203   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:48.398685   19551 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace has status "Ready":"True"
	I0505 20:59:48.398710   19551 pod_ready.go:81] duration metric: took 23.506675049s for pod "nvidia-device-plugin-daemonset-4s79g" in "kube-system" namespace to be "Ready" ...
	I0505 20:59:48.398721   19551 pod_ready.go:38] duration metric: took 29.344212848s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
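For context on the pod_ready.go lines above: the wait is essentially a poll of the pod's Ready condition until it reports True. Below is a minimal client-go sketch of that pattern; the kubeconfig path, poll interval, and timeout are illustrative assumptions, not minikube's actual pod_ready.go implementation.

// pod_ready_sketch.go - hedged sketch: poll a named pod until its Ready
// condition is True, roughly what the pod_ready.go log lines record.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at this path points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 6m, mirroring the "has status Ready:False" loop above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "nvidia-device-plugin-daemonset-4s79g", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("ready wait finished, err:", err)
}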
	I0505 20:59:48.398738   19551 api_server.go:52] waiting for apiserver process to appear ...
	I0505 20:59:48.398797   19551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 20:59:48.416807   19551 api_server.go:72] duration metric: took 31.862880322s to wait for apiserver process to appear ...
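The api_server.go:52/72 step above waits for the kube-apiserver process by running pgrep inside the guest via minikube's ssh_runner. A rough local equivalent, run directly on the node rather than over SSH, might look like the sketch below; it is an illustration only, not minikube's code.

// apiserver_process_sketch.go - hedged sketch: check whether a kube-apiserver
// process matching the minikube profile exists, similar in spirit to the
// "sudo pgrep -xnf kube-apiserver.*minikube.*" command the log shows.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 when at least one process matches, non-zero when none do.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Printf("apiserver PID(s): %s", out)
}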
	I0505 20:59:48.416822   19551 api_server.go:88] waiting for apiserver healthz status ...
	I0505 20:59:48.416839   19551 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I0505 20:59:48.421720   19551 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I0505 20:59:48.422596   19551 api_server.go:141] control plane version: v1.30.0
	I0505 20:59:48.422620   19551 api_server.go:131] duration metric: took 5.791761ms to wait for apiserver health ...
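The healthz and control-plane-version checks at 20:59:48.42 can be reproduced with client-go's discovery client. The sketch below assumes the same kubeconfig credentials the test uses; it mirrors the "returned 200: ok" and "control plane version: v1.30.0" lines rather than reproducing minikube's api_server.go.

// apiserver_healthz_sketch.go - hedged sketch: hit /healthz and read the
// server version with the cluster's TLS credentials.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz; a healthy apiserver answers 200 with the body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))

	// Server (control plane) version, e.g. v1.30.0 in this run.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}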
	I0505 20:59:48.422631   19551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 20:59:48.431221   19551 system_pods.go:59] 18 kube-system pods found
	I0505 20:59:48.431248   19551 system_pods.go:61] "coredns-7db6d8ff4d-gnhf4" [230b69b2-9942-4035-bba5-637a32176daa] Running
	I0505 20:59:48.431255   19551 system_pods.go:61] "csi-hostpath-attacher-0" [9d360d83-ab63-48d2-969c-ef12d5ad5b99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0505 20:59:48.431261   19551 system_pods.go:61] "csi-hostpath-resizer-0" [1b5b593c-dd13-4bd6-9692-a3b8ec11bcca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0505 20:59:48.431269   19551 system_pods.go:61] "csi-hostpathplugin-nxl2f" [b71c6ae3-e8a1-49ac-b346-4d7e1a3053b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0505 20:59:48.431274   19551 system_pods.go:61] "etcd-addons-476078" [7dcbb44a-bd07-4992-95c7-b1fd7be71ee4] Running
	I0505 20:59:48.431314   19551 system_pods.go:61] "kube-apiserver-addons-476078" [38eb3fa4-5e1a-444e-93f9-0ad0a88cb90f] Running
	I0505 20:59:48.431318   19551 system_pods.go:61] "kube-controller-manager-addons-476078" [fda15bec-4567-4ef6-b78a-ddfbb106d504] Running
	I0505 20:59:48.431322   19551 system_pods.go:61] "kube-ingress-dns-minikube" [92b9cc6b-903c-41c2-9101-cc4acb08ee22] Running
	I0505 20:59:48.431326   19551 system_pods.go:61] "kube-proxy-qrfs4" [b627b443-bc49-42d8-ae83-f6893f382003] Running
	I0505 20:59:48.431329   19551 system_pods.go:61] "kube-scheduler-addons-476078" [b0712527-df01-4ef7-a896-261278abedb9] Running
	I0505 20:59:48.431335   19551 system_pods.go:61] "metrics-server-c59844bb4-nsvl8" [8b3d4733-9d64-4587-9ed8-b33c78c6ccf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0505 20:59:48.431341   19551 system_pods.go:61] "nvidia-device-plugin-daemonset-4s79g" [b7211778-f5aa-4ebe-973a-ac4ee0054143] Running
	I0505 20:59:48.431347   19551 system_pods.go:61] "registry-l4nvm" [6d3660b5-72f0-4cb8-850d-66e3367f0b2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0505 20:59:48.431355   19551 system_pods.go:61] "registry-proxy-8z9cj" [2b07c767-5f91-4286-b104-2fd55988d9ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0505 20:59:48.431363   19551 system_pods.go:61] "snapshot-controller-745499f584-69vg6" [65bdd394-ec86-4879-b54e-cea00657265d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.431373   19551 system_pods.go:61] "snapshot-controller-745499f584-drspx" [5c863640-d719-499b-bfcb-0f89b84bcda9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.431378   19551 system_pods.go:61] "storage-provisioner" [fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3] Running
	I0505 20:59:48.431386   19551 system_pods.go:61] "tiller-deploy-6677d64bcd-2tngp" [9e6ccc20-fbbd-4495-a454-2e47945c33dc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0505 20:59:48.431392   19551 system_pods.go:74] duration metric: took 8.75363ms to wait for pod list to return data ...
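system_pods.go:43-74 above is a plain listing of the kube-system namespace with each pod's phase and unready containers. A hedged sketch of the same query (assumed kubeconfig path, not the test's own helper) follows.

// system_pods_sketch.go - hedged sketch: list kube-system pods and print
// their phases, like the "18 kube-system pods found" block above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}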
	I0505 20:59:48.431401   19551 default_sa.go:34] waiting for default service account to be created ...
	I0505 20:59:48.433736   19551 default_sa.go:45] found service account: "default"
	I0505 20:59:48.433754   19551 default_sa.go:55] duration metric: took 2.34573ms for default service account to be created ...
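The default_sa.go lines simply confirm that the "default" ServiceAccount exists in the default namespace; the equivalent client-go call is a single Get, sketched here under the same assumed kubeconfig.

// default_sa_sketch.go - hedged sketch: confirm the "default" service
// account exists, as the default_sa.go lines above do.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found service account: %q\n", sa.Name)
}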
	I0505 20:59:48.433761   19551 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 20:59:48.441698   19551 system_pods.go:86] 18 kube-system pods found
	I0505 20:59:48.441722   19551 system_pods.go:89] "coredns-7db6d8ff4d-gnhf4" [230b69b2-9942-4035-bba5-637a32176daa] Running
	I0505 20:59:48.441730   19551 system_pods.go:89] "csi-hostpath-attacher-0" [9d360d83-ab63-48d2-969c-ef12d5ad5b99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0505 20:59:48.441736   19551 system_pods.go:89] "csi-hostpath-resizer-0" [1b5b593c-dd13-4bd6-9692-a3b8ec11bcca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0505 20:59:48.441745   19551 system_pods.go:89] "csi-hostpathplugin-nxl2f" [b71c6ae3-e8a1-49ac-b346-4d7e1a3053b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0505 20:59:48.441750   19551 system_pods.go:89] "etcd-addons-476078" [7dcbb44a-bd07-4992-95c7-b1fd7be71ee4] Running
	I0505 20:59:48.441755   19551 system_pods.go:89] "kube-apiserver-addons-476078" [38eb3fa4-5e1a-444e-93f9-0ad0a88cb90f] Running
	I0505 20:59:48.441760   19551 system_pods.go:89] "kube-controller-manager-addons-476078" [fda15bec-4567-4ef6-b78a-ddfbb106d504] Running
	I0505 20:59:48.441764   19551 system_pods.go:89] "kube-ingress-dns-minikube" [92b9cc6b-903c-41c2-9101-cc4acb08ee22] Running
	I0505 20:59:48.441768   19551 system_pods.go:89] "kube-proxy-qrfs4" [b627b443-bc49-42d8-ae83-f6893f382003] Running
	I0505 20:59:48.441772   19551 system_pods.go:89] "kube-scheduler-addons-476078" [b0712527-df01-4ef7-a896-261278abedb9] Running
	I0505 20:59:48.441778   19551 system_pods.go:89] "metrics-server-c59844bb4-nsvl8" [8b3d4733-9d64-4587-9ed8-b33c78c6ccf0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0505 20:59:48.441785   19551 system_pods.go:89] "nvidia-device-plugin-daemonset-4s79g" [b7211778-f5aa-4ebe-973a-ac4ee0054143] Running
	I0505 20:59:48.441792   19551 system_pods.go:89] "registry-l4nvm" [6d3660b5-72f0-4cb8-850d-66e3367f0b2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0505 20:59:48.441797   19551 system_pods.go:89] "registry-proxy-8z9cj" [2b07c767-5f91-4286-b104-2fd55988d9ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0505 20:59:48.441804   19551 system_pods.go:89] "snapshot-controller-745499f584-69vg6" [65bdd394-ec86-4879-b54e-cea00657265d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.441810   19551 system_pods.go:89] "snapshot-controller-745499f584-drspx" [5c863640-d719-499b-bfcb-0f89b84bcda9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0505 20:59:48.441817   19551 system_pods.go:89] "storage-provisioner" [fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3] Running
	I0505 20:59:48.441822   19551 system_pods.go:89] "tiller-deploy-6677d64bcd-2tngp" [9e6ccc20-fbbd-4495-a454-2e47945c33dc] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0505 20:59:48.441828   19551 system_pods.go:126] duration metric: took 8.061296ms to wait for k8s-apps to be running ...
	I0505 20:59:48.441835   19551 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 20:59:48.441871   19551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 20:59:48.458387   19551 system_svc.go:56] duration metric: took 16.545926ms WaitForService to wait for kubelet
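system_svc.go checks the kubelet unit over SSH with systemctl. Run directly on the node, the same probe reduces to an exit-code check; the sketch below uses the standard "systemctl is-active --quiet kubelet" form and is not minikube's ssh_runner code.

// kubelet_svc_sketch.go - hedged sketch: systemctl is-active exits 0 when
// the kubelet unit is active, matching the WaitForService step above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}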
	I0505 20:59:48.458409   19551 kubeadm.go:576] duration metric: took 31.904483648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 20:59:48.458431   19551 node_conditions.go:102] verifying NodePressure condition ...
	I0505 20:59:48.460892   19551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 20:59:48.460919   19551 node_conditions.go:123] node cpu capacity is 2
	I0505 20:59:48.460933   19551 node_conditions.go:105] duration metric: took 2.497185ms to run NodePressure ...
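node_conditions.go:102-123 reads the node's reported capacity; the same figures (ephemeral-storage 17734596Ki, cpu 2 in this run) can be pulled with a node list, sketched below under the same assumed kubeconfig.

// node_capacity_sketch.go - hedged sketch: print each node's ephemeral
// storage and CPU capacity, the values the NodePressure step logs above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral capacity %s, cpu capacity %s\n",
			n.Name, storage.String(), cpu.String())
	}
}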
	I0505 20:59:48.460946   19551 start.go:240] waiting for startup goroutines ...
	I0505 20:59:48.476016   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:48.706131   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:48.873832   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:48.895286   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:48.976339   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:49.208194   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:49.372655   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:49.395655   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:49.476842   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:49.705361   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:49.872493   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:49.894091   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:49.975994   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:50.206244   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:50.372756   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:50.395126   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:50.476631   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:50.705579   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:50.872222   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:50.897032   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:50.979617   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:51.206318   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:51.372477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:51.398801   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:51.476198   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:51.707013   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:51.872278   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:51.895116   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:51.975926   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:52.205766   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:52.371794   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:52.394529   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:52.476102   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:52.705255   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:52.871943   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:52.895345   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:52.976688   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:53.205684   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:53.371976   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:53.394593   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:53.476181   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:53.706475   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:53.872162   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:53.895944   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:53.975751   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:54.205589   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:54.371624   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:54.394635   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:54.476865   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:54.705803   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:54.872852   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:54.895547   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:54.977636   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:55.206048   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:55.372379   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:55.394543   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:55.477506   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:55.706341   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:55.872722   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:55.894672   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:55.976398   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:56.205888   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:56.372118   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:56.395062   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:56.476355   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:56.707024   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:57.361989   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:57.362619   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:57.365787   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:57.366981   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:57.377252   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:57.394850   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:57.477261   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:57.709931   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:57.872655   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:57.895665   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:57.976241   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:58.205684   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:58.372654   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:58.395112   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:58.475915   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:58.706213   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:58.872413   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:58.894833   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:58.976343   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:59.205664   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:59.371904   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:59.395372   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:59.475754   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 20:59:59.704872   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 20:59:59.872317   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 20:59:59.893916   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 20:59:59.976477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:00.212672   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:00.372908   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:00.395083   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:00.475348   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:00.704776   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:00.872105   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:00.894744   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:00.983181   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:01.205714   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:01.372300   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:01.394560   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:01.476855   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:01.716326   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:01.872728   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:01.895359   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:01.976938   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:02.206224   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:02.373961   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:02.395304   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:02.476664   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:02.705588   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:02.872005   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:02.895510   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:02.976832   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:03.205277   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:03.372411   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:03.394832   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:03.480477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:03.704700   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:03.873874   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:03.896766   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:03.977848   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:04.206441   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:04.372895   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:04.395022   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:04.476534   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:04.712958   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:04.873422   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:04.894489   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:04.976815   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:05.206766   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:05.371963   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:05.395499   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:05.476276   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:05.705351   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:05.872611   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:05.895156   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:05.976504   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:06.206256   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:06.373021   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:06.395752   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:06.478042   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:06.706203   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:06.872787   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:06.897026   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:06.976483   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:07.206256   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:07.372771   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:07.395205   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:07.476036   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:07.706536   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:08.200972   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:08.201015   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:08.204336   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:08.209546   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:08.372031   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:08.395014   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:08.476966   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:08.710237   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:08.872878   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:08.895623   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:08.977240   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:09.205557   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:09.374122   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:09.395257   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:09.476731   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:09.706060   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:09.872531   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:09.894328   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:09.976576   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:10.207796   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:10.572230   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:10.573007   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:10.574855   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:10.706334   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:10.872812   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:10.895266   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:10.976459   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:11.206326   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0505 21:00:11.375881   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:11.398492   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:11.477239   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:11.705566   19551 kapi.go:107] duration metric: took 48.005096188s to wait for kubernetes.io/minikube-addons=registry ...
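The kapi.go:96/107 loop that finally completes here for kubernetes.io/minikube-addons=registry is a label-selector poll: list the pods matching the addon label and keep reporting until every one is Running. A hedged sketch of that pattern (assumed kubeconfig, interval, and timeout; not minikube's kapi.go itself) follows.

// kapi_wait_sketch.go - hedged sketch: poll pods by label selector until all
// of them are Running, the pattern behind the kapi.go:96/107 lines.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=registry"
	start := time.Now()
	err = wait.PollImmediate(500*time.Millisecond, 10*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(
			context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // nothing matching yet; keep waiting
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	fmt.Printf("took %s to wait for %s, err: %v\n", time.Since(start), selector, err)
}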
	I0505 21:00:11.872559   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:11.894919   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:11.977606   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:12.372444   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:12.394933   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:12.477502   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:12.872822   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:12.896877   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:12.981425   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:13.374173   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:13.397418   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:13.476421   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:13.873375   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:13.897002   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:13.976047   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:14.372515   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:14.394716   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:14.476985   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:14.872551   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:14.894714   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:14.976318   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:15.372854   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:15.395905   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:15.485348   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:15.873772   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:15.897978   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:15.979356   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:16.374915   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:16.395725   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:16.477421   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:16.872591   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:16.895678   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:16.977815   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:17.372054   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:17.395589   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:17.481134   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:17.872323   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:17.895166   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:17.976395   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:18.395630   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:18.398508   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:18.480130   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:18.875875   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:18.895555   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:18.976527   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:19.371738   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:19.394556   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:19.476300   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:19.875356   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:19.901203   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:19.979245   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:20.371966   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:20.395521   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:20.476642   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:20.871975   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:20.894997   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:20.976243   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:21.372896   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:21.395443   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:21.476719   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:21.872938   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:21.895282   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:21.977365   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:22.372343   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:22.394680   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:22.477925   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:22.875693   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:22.896402   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:22.977296   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:23.375360   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:23.397828   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:23.477271   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:23.876793   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:23.898179   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:23.976657   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:24.372011   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:24.395093   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:24.476209   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:24.872276   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:24.894398   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:24.976751   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:25.372519   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:25.394762   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:25.477645   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:25.876339   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:25.893958   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:25.977283   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:26.373062   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:26.395403   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:26.477176   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:26.873152   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:26.895733   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:26.977392   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:27.372831   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:27.395331   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:27.485653   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:27.872294   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:27.895868   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:27.978707   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:28.374542   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:28.396389   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:28.477994   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:28.871788   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:28.895129   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:28.976543   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:29.372481   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:29.394874   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:29.475490   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:29.873230   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:29.895983   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:29.979901   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:30.371683   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:30.394545   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:30.613509   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:30.875422   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:30.894466   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:30.976768   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:31.371757   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:31.396805   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:31.478047   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:32.075306   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:32.076954   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:32.077787   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:32.372371   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:32.394568   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:32.480667   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:32.872016   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:32.894953   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:32.975078   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:33.371971   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:33.395180   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:33.476229   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:33.872758   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:33.962673   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:33.979301   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:34.372217   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:34.395457   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:34.483069   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:34.873519   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:34.895999   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:34.976533   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:35.372622   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:35.395351   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:35.477243   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:35.872585   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:35.895360   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:35.976426   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:36.373400   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:36.397022   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:36.475568   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:36.872138   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:36.895872   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:36.976217   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:37.372507   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:37.394526   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:37.476468   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:37.872750   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:37.895792   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:37.976631   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:38.371893   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:38.395209   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:38.477223   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:38.872638   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:38.895212   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:38.976295   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:39.372777   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:39.395621   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:39.476809   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:39.872967   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:39.895731   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:39.976847   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:40.374462   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:40.393712   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:40.480600   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:40.879626   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:40.906032   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:40.979764   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:41.376816   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:41.417998   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:41.477041   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:41.872767   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:41.895358   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:41.976463   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:42.594827   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:42.595433   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:42.598951   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:42.872481   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:42.895256   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:42.979282   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:43.372155   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:43.397489   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:43.476696   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:43.872922   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:43.895327   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:43.983066   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:44.393785   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:44.400949   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:44.490075   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:44.878780   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:44.895440   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:44.975595   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:45.371842   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:45.394909   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:45.476483   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:45.879241   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:45.895729   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:45.976943   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:46.373235   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:46.395742   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:46.476480   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:47.145172   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:47.146189   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:47.146750   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:47.372876   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:47.395445   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:47.476052   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:47.872404   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:47.895793   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:47.977179   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:48.372652   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:48.396002   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:48.476270   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:48.872136   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:48.895798   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:48.986167   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:49.375068   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:49.397732   19551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0505 21:00:49.475569   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:49.873407   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:49.895846   19551 kapi.go:107] duration metric: took 1m24.505927196s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0505 21:00:49.977340   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:50.374177   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:50.484241   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:50.873407   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:50.976219   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:51.620060   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:51.621805   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:51.873271   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:51.979719   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:52.372887   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:52.476720   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:52.871606   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:52.979739   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:53.372517   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:53.479280   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:53.875703   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:53.977419   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:54.373690   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:54.478148   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:54.874156   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:54.986500   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:55.374542   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:55.477032   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:55.871852   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:55.977545   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:56.373714   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:56.476604   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:56.873197   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:56.977061   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0505 21:00:57.374304   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:57.477398   19551 kapi.go:107] duration metric: took 1m29.507231519s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0505 21:00:57.871936   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:58.373530   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:58.873409   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:59.373070   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:00:59.872263   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:00.373296   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:00.872446   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:01.374789   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:01.872857   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:02.372371   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:02.874583   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:03.373206   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:03.872123   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:04.372865   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:04.874268   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:05.372491   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:05.875068   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:06.372353   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:06.873503   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:07.372210   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:07.873522   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:08.372609   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:08.874975   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:09.373074   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:09.956295   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:10.372457   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:10.874097   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:11.372220   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:11.874624   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:12.372881   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:12.872477   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:13.372612   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:13.875055   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:14.372867   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:14.872576   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:15.372833   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:15.872862   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:16.374146   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:16.872248   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:17.372633   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:17.873003   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:18.372677   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:18.873528   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:19.374925   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:19.872768   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:20.373580   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:20.873149   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:21.372685   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:21.873024   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:22.372673   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:22.872636   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:23.374715   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:23.872798   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:24.373137   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:24.872673   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:25.372785   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:25.873414   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:26.372395   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:26.872330   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:27.372458   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:27.872507   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:28.373498   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:28.873219   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:29.373505   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:29.872753   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:30.374902   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:30.872930   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:31.371925   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:31.873501   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:32.372667   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:32.873040   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:33.372501   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:33.873776   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:34.372785   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:34.872109   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:35.372382   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:35.872186   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:36.372648   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:36.873061   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:37.372198   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:37.876703   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:38.373285   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:38.872726   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:39.372707   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:39.872170   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:40.373465   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:40.872511   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:41.373335   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:41.872628   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:42.373539   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:42.872999   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:43.371603   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:43.872566   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:44.373086   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:44.872179   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:45.373444   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:45.872549   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:46.372625   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:46.872527   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:47.373455   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:47.871880   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:48.371699   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:48.873343   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:49.372440   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:49.872823   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:50.371894   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:50.872910   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:51.371840   19551 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0505 21:01:51.872870   19551 kapi.go:107] duration metric: took 2m21.00479971s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0505 21:01:51.874704   19551 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-476078 cluster.
	I0505 21:01:51.876328   19551 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0505 21:01:51.877606   19551 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0505 21:01:51.879152   19551 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0505 21:01:51.880563   19551 addons.go:510] duration metric: took 2m35.326593725s for enable addons: enabled=[ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0505 21:01:51.880614   19551 start.go:245] waiting for cluster config update ...
	I0505 21:01:51.880632   19551 start.go:254] writing updated cluster config ...
	I0505 21:01:51.880920   19551 ssh_runner.go:195] Run: rm -f paused
	I0505 21:01:51.935210   19551 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0505 21:01:51.937186   19551 out.go:177] * Done! kubectl is now configured to use "addons-476078" cluster and "default" namespace by default
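The repeated "waiting for pod ..., current state: Pending" lines above, and the three "duration metric: took ... to wait for ..." lines that end them, come from minikube polling the cluster until the pods matching a label selector leave Pending. The following is a minimal, illustrative sketch of that kind of wait written against k8s.io/client-go; it is not minikube's kapi implementation, and the ingress-nginx namespace, the 500ms poll interval and the 6-minute timeout are assumptions (only the label selector is taken from the log above).

// wait_for_pods.go: poll until all pods matching a selector are Running.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the default kubeconfig, as kubectl would.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const selector = "app.kubernetes.io/name=ingress-nginx" // label selector from the log above
	const namespace = "ingress-nginx"                       // assumption: namespace of the ingress addon
	start := time.Now()

	// Poll every 500ms for up to 6 minutes (both values are assumptions).
	for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			log.Printf("waiting for pod %q, current state: Pending", selector)
			continue
		}
		allRunning := true
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
				break
			}
		}
		if allRunning {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
			return
		}
	}
	log.Fatalf("timed out waiting for %s", selector)
}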
	
	
	==> CRI-O <==
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.541372072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b99ea752b7cad46711807229f07d8f43a6fb4ef08b22d378e07d5eda579a58c3,PodSandboxId:088fffe955b6fb98bbdfa224bc4b0057178baa3a3c9a9adc41602198d7b761e2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714943098747125803,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-28xbq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fca35d04-feb9-4aa8-b28e-582ccdde30b3,},Annotations:map[string]string{io.kubernetes.container.hash: e1ec41f,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4875e65576390576ddc7bb10fe9f4a135c15f48c22e01f2e26ec76fcea8e3f2d,PodSandboxId:930548ca74204279807e110df6241d75f3a71928df046856904881f918d49a15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714942958350024556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd318b3-f460-41a7-8b57-def112b59f42,},Annotations:map[string]string{io.kuberne
tes.container.hash: 71ff289f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d73a3c5fb17d80f5ac83f20ab31b627b7313bb0271ca970f606bc20cc744a1,PodSandboxId:4ad089ecea889b49f9f3583a6616859766f27cad39ed476ceebbfba068396c7c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714942922198607959,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-9tvbl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0a342843-0e7b-4235-8a87-1ab68db8e982,},Annotations:map[string]string{io.kubernetes.container.hash: 339662e5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a,PodSandboxId:32345da98def5801f0a61a844ee21ae1988070fb778e54314d210352304c49b7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714942911152960461,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-j6g6c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 350b4f6a-6a3b-404f-813f-84fd686ecd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 85d162ce,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9de9d8ab45c00db0b8ef19b6f7edc9c34c1df029fe91f4f5e4ce2ea80d6c7f,PodSandboxId:ce6559bbc0c22fb1d31e049017719339f47aec95475e4a24596865bb2a6ca094,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171494
2832163888123,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-2nv87,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6020ab74-7313-45e6-8080-4e84b676efe6,},Annotations:map[string]string{io.kubernetes.container.hash: 80e8b8e4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df,PodSandboxId:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714942800071785995,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nsvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,},Annotations:map[string]string{io.kubernetes.container.hash: 924e7843,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4,PodSandboxId:c9860ed473d4858386b69e8d662426e54a7450884a6ab91e1a8705cb9b3a6e4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714942768900464464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,},Annotations:map[string]string{io.kubernetes.container.hash: f9eee02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948,PodSandboxId:b0dd0025b9663eadd825753c4fa81257b86a6115a6c63bb5159ade58fdff06e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714942761309610772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnhf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b69b2-9942-4035-bba5-637a32176daa,},Annotations:map[string]string{io.kubernetes.container.hash: 27a2bfe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993,PodSand
boxId:51d78c16bbcdc39b6c1e9f90e2a00e4b80d4b66d9268652840e6686b95d322df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714942759200801790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qrfs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b627b443-bc49-42d8-ae83-f6893f382003,},Annotations:map[string]string{io.kubernetes.container.hash: 5382cea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f,PodSandboxId:4de96b20bb9ef3046e406342d12
59f2165032c640bce1d4eeab12c65545372e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714942737426442242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa7b1eda3c5a600ae0b2a0ea78fb243,},Annotations:map[string]string{io.kubernetes.container.hash: ca6ce4b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9,PodSandboxId:5d962e468c6ce947137c3b6849400443ce13c4c8941e2771802d2f27f37c948e,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714942737312474166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3920ed2d88d8c0d183cbbde1ee79949,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777,PodSandboxId:01e9bfa6c65ce73b9c2b5172b5d3c0256982c5fad1e1f4e8d850a5f5d74154e6,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714942737302536455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f2eeee73d76512f8cf103629b0adf8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972,PodSandboxId:2645e8c72e081d1751645cb482df2b6f3508faf426f66c8714d26dc01f62aa09,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714942737338558826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682daeee39870862b84bb87f95a68c7,},Annotations:map[string]string{io.kubernetes.container.hash: a13feaa9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a32e51e-710d-4825-ae02-061b7e7872a2 name=/runtime.v1.RuntimeService/ListContainers
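For reference, the ListContainersResponse dumped above is the reply to a plain CRI RuntimeService/ListContainers call made over CRI-O's gRPC socket. Below is a minimal sketch of issuing that call directly with k8s.io/cri-api; it is not the tooling this test uses, and the default socket path /var/run/crio/crio.sock is an assumption.

// list_containers.go: query a CRI runtime (e.g. CRI-O) for all containers.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; grpc-go understands the unix:// scheme.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter: list every container, matching the debug response above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex characters; truncate for readability.
		fmt.Printf("%s  %-25s  %-18s  pod=%s\n",
			c.Id[:12], c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"])
	}
}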
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.558273946Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df.FFZLN2\"" file="server/server.go:805"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.558326226Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df.FFZLN2\"" file="server/server.go:805"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.558346215Z" level=debug msg="Container or sandbox exited: 3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df.FFZLN2" file="server/server.go:810"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.558375085Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df\"" file="server/server.go:805"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.558392867Z" level=debug msg="Container or sandbox exited: 3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df" file="server/server.go:810"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.558411522Z" level=debug msg="container exited and found: 3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df" file="server/server.go:825"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.558445890Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df.FFZLN2\"" file="server/server.go:805"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.565534639Z" level=debug msg="Unmounted container 3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df" file="storage/runtime.go:495" id=ba2f3731-3304-4e6c-865d-ce2756e76974 name=/runtime.v1.RuntimeService/StopContainer
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.590192698Z" level=debug msg="Found exit code for 3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df: 0" file="oci/runtime_oci.go:1022"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.590410663Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:924e7843 io.kubernetes.container.name:metrics-server io.kubernetes.container.ports:[{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}] io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"924e7843\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"https\\\",\\\"containerPort\\\":4443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.c
ontainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-05-05T21:00:00.071906165Z io.kubernetes.cri-o.IP.0:10.244.0.9 io.kubernetes.cri-o.Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872 io.kubernetes.cri-o.ImageName:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a io.kubernetes.cri-o.ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62 io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"metrics-server\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-nsvl8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8b3d4733-9
d64-4587-9ed8-b33c78c6ccf0\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-nsvl8_8b3d4733-9d64-4587-9ed8-b33c78c6ccf0/metrics-server/0.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/b3d74c6bdf0a324e299d788f806fc366ce52161a30665c995699216b948ad5da/merged io.kubernetes.cri-o.Name:k8s_metrics-server_metrics-server-c59844bb4-nsvl8_kube-system_8b3d4733-9d64-4587-9ed8-b33c78c6ccf0_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-nsvl8_kube-system_8b3d4733-9d64-4587-9ed8-b33c78c6ccf0_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOn
ce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/tmp\",\"host_path\":\"/var/lib/kubelet/pods/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0/volumes/kubernetes.io~empty-dir/tmp-dir\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0/containers/metrics-server/b586dcb8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0/volumes/kubernetes.io~projected/kube-api-access-hfj6v\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:metrics-server-c59844bb4-nsvl8 io.kubernetes.pod.na
mespace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:8b3d4733-9d64-4587-9ed8-b33c78c6ccf0 kubernetes.io/config.seen:2024-05-05T20:59:22.951196516Z kubernetes.io/config.source:api]} Created:2024-05-05 21:00:00.123267038 +0000 UTC Started:2024-05-05 21:00:00.160767799 +0000 UTC m=+72.490154419 Finished:2024-05-05 21:08:01.557452674 +0000 UTC ExitCode:0xc000fcd180 OOMKilled:false SeccompKilled:false Error: InitPid:4538 InitStartTime:9589 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=ba2f3731-3304-4e6c-865d-ce2756e76974 name=/runtime.v1.RuntimeService/StopContainer
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.594902749Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df\"" file="server/server.go:805"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.596485570Z" level=info msg="Stopped container 3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df: kube-system/metrics-server-c59844bb4-nsvl8/metrics-server" file="server/container_stop.go:29" id=ba2f3731-3304-4e6c-865d-ce2756e76974 name=/runtime.v1.RuntimeService/StopContainer
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.596572053Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=ba2f3731-3304-4e6c-865d-ce2756e76974 name=/runtime.v1.RuntimeService/StopContainer
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.597212509Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,}" file="otel-collector/interceptors.go:62" id=161ec8dd-e94f-42f9-a92c-bfffa5653169 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.597295258Z" level=info msg="Stopping pod sandbox: fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c" file="server/sandbox_stop.go:18" id=161ec8dd-e94f-42f9-a92c-bfffa5653169 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.597604135Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-nsvl8 Namespace:kube-system ID:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c UID:8b3d4733-9d64-4587-9ed8-b33c78c6ccf0 NetNS:/var/run/netns/3ff55cb0-dc1c-4ff7-8fd8-c465bf971947 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod8b3d4733-9d64-4587-9ed8-b33c78c6ccf0 PodAnnotations:0xc0018242f8}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.597942048Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-nsvl8 from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.609515022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09c46a81-66d4-47ec-b940-4392c4535f7f name=/runtime.v1.RuntimeService/Version
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.609598011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09c46a81-66d4-47ec-b940-4392c4535f7f name=/runtime.v1.RuntimeService/Version
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.616251392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1c846d7-da00-4634-96d3-d706719b3f33 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.617444279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714943281617417913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1c846d7-da00-4634-96d3-d706719b3f33 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.618393670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd611064-8fe2-45e6-9a7b-32e07eeb7322 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.618449181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd611064-8fe2-45e6-9a7b-32e07eeb7322 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:08:01 addons-476078 crio[679]: time="2024-05-05 21:08:01.618976599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b99ea752b7cad46711807229f07d8f43a6fb4ef08b22d378e07d5eda579a58c3,PodSandboxId:088fffe955b6fb98bbdfa224bc4b0057178baa3a3c9a9adc41602198d7b761e2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714943098747125803,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-28xbq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fca35d04-feb9-4aa8-b28e-582ccdde30b3,},Annotations:map[string]string{io.kubernetes.container.hash: e1ec41f,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4875e65576390576ddc7bb10fe9f4a135c15f48c22e01f2e26ec76fcea8e3f2d,PodSandboxId:930548ca74204279807e110df6241d75f3a71928df046856904881f918d49a15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714942958350024556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddd318b3-f460-41a7-8b57-def112b59f42,},Annotations:map[string]string{io.kuberne
tes.container.hash: 71ff289f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d73a3c5fb17d80f5ac83f20ab31b627b7313bb0271ca970f606bc20cc744a1,PodSandboxId:4ad089ecea889b49f9f3583a6616859766f27cad39ed476ceebbfba068396c7c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714942922198607959,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-9tvbl,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 0a342843-0e7b-4235-8a87-1ab68db8e982,},Annotations:map[string]string{io.kubernetes.container.hash: 339662e5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a,PodSandboxId:32345da98def5801f0a61a844ee21ae1988070fb778e54314d210352304c49b7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714942911152960461,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-j6g6c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 350b4f6a-6a3b-404f-813f-84fd686ecd8b,},Annotations:map[string]string{io.kubernetes.container.hash: 85d162ce,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9de9d8ab45c00db0b8ef19b6f7edc9c34c1df029fe91f4f5e4ce2ea80d6c7f,PodSandboxId:ce6559bbc0c22fb1d31e049017719339f47aec95475e4a24596865bb2a6ca094,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171494
2832163888123,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-2nv87,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 6020ab74-7313-45e6-8080-4e84b676efe6,},Annotations:map[string]string{io.kubernetes.container.hash: 80e8b8e4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3330869b88ae34030adfb1045b69244f15ff7c7ba74934c355a0cda33b2420df,PodSandboxId:fe1bb8afdd7beb0557defab9cbe03bfe4bd893bd89e05337f80341326822577c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1714942800071785995,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nsvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b3d4733-9d64-4587-9ed8-b33c78c6ccf0,},Annotations:map[string]string{io.kubernetes.container.hash: 924e7843,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4,PodSandboxId:c9860ed473d4858386b69e8d662426e54a7450884a6ab91e1a8705cb9b3a6e4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714942768900464464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6ddb95-4a5d-4ac3-8b8f-ccdbd02cdce3,},Annotations:map[string]string{io.kubernetes.container.hash: f9eee02d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948,PodSandboxId:b0dd0025b9663eadd825753c4fa81257b86a6115a6c63bb5159ade58fdff06e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714942761309610772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnhf4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b69b2-9942-4035-bba5-637a32176daa,},Annotations:map[string]string{io.kubernetes.container.hash: 27a2bfe1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993,PodSandb
oxId:51d78c16bbcdc39b6c1e9f90e2a00e4b80d4b66d9268652840e6686b95d322df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714942759200801790,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qrfs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b627b443-bc49-42d8-ae83-f6893f382003,},Annotations:map[string]string{io.kubernetes.container.hash: 5382cea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f,PodSandboxId:4de96b20bb9ef3046e406342d125
9f2165032c640bce1d4eeab12c65545372e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714942737426442242,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa7b1eda3c5a600ae0b2a0ea78fb243,},Annotations:map[string]string{io.kubernetes.container.hash: ca6ce4b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9,PodSandboxId:5d962e468c6ce947137c3b6849400443ce13c4c8941e2771802d2f27f37c948e,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714942737312474166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3920ed2d88d8c0d183cbbde1ee79949,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777,PodSandboxId:01e9bfa6c65ce73b9c2b5172b5d3c0256982c5fad1e1f4e8d850a5f5d74154e6,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714942737302536455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f2eeee73d76512f8cf103629b0adf8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972,PodSandboxId:2645e8c72e081d1751645cb482df2b6f3508faf426f66c8714d26dc01f62aa09,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714942737338558826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-476078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a682daeee39870862b84bb87f95a68c7,},Annotations:map[string]string{io.kubernetes.container.hash: a13feaa9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd611064-8fe2-45e6-9a7b-32e07eeb7322 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b99ea752b7cad       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   088fffe955b6f       hello-world-app-86c47465fc-28xbq
	4875e65576390       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                         5 minutes ago       Running             nginx                     0                   930548ca74204       nginx
	33d73a3c5fb17       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   5 minutes ago       Running             headlamp                  0                   4ad089ecea889       headlamp-7559bf459f-9tvbl
	7444e66e63708       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   32345da98def5       gcp-auth-5db96cd9b4-j6g6c
	0f9de9d8ab45c       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   ce6559bbc0c22       yakd-dashboard-5ddbf7d777-2nv87
	3330869b88ae3       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   8 minutes ago       Exited              metrics-server            0                   fe1bb8afdd7be       metrics-server-c59844bb4-nsvl8
	dd225ed77802f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   c9860ed473d48       storage-provisioner
	b9645db293186       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   b0dd0025b9663       coredns-7db6d8ff4d-gnhf4
	bbf845eacbdf1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        8 minutes ago       Running             kube-proxy                0                   51d78c16bbcdc       kube-proxy-qrfs4
	37b082aa54a6b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        9 minutes ago       Running             etcd                      0                   4de96b20bb9ef       etcd-addons-476078
	7c82f83da0a70       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        9 minutes ago       Running             kube-apiserver            0                   2645e8c72e081       kube-apiserver-addons-476078
	5647ed381b790       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        9 minutes ago       Running             kube-scheduler            0                   5d962e468c6ce       kube-scheduler-addons-476078
	4465f04bcc1d8       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        9 minutes ago       Running             kube-controller-manager   0                   01e9bfa6c65ce       kube-controller-manager-addons-476078
	
	
	==> coredns [b9645db293186a9f14b627c3dc3e3f97c0c6f3d18fbb7b2bba892f8cb5b05948] <==
	[INFO] 10.244.0.7:60000 - 17304 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000288916s
	[INFO] 10.244.0.7:42673 - 1271 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010519s
	[INFO] 10.244.0.7:42673 - 23029 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042233s
	[INFO] 10.244.0.7:60915 - 55580 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000622s
	[INFO] 10.244.0.7:60915 - 13074 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102583s
	[INFO] 10.244.0.7:53029 - 44331 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000070472s
	[INFO] 10.244.0.7:53029 - 22581 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060784s
	[INFO] 10.244.0.7:51014 - 58299 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000254878s
	[INFO] 10.244.0.7:51014 - 18247 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00026172s
	[INFO] 10.244.0.7:44546 - 6369 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000298138s
	[INFO] 10.244.0.7:44546 - 58083 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076978s
	[INFO] 10.244.0.7:43853 - 55745 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080383s
	[INFO] 10.244.0.7:43853 - 11724 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034283s
	[INFO] 10.244.0.7:59564 - 34109 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090581s
	[INFO] 10.244.0.7:59564 - 60987 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00038734s
	[INFO] 10.244.0.22:44000 - 21608 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000535313s
	[INFO] 10.244.0.22:53522 - 50801 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162607s
	[INFO] 10.244.0.22:46779 - 35167 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000207779s
	[INFO] 10.244.0.22:37589 - 17861 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147137s
	[INFO] 10.244.0.22:48892 - 16730 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160378s
	[INFO] 10.244.0.22:60215 - 3806 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120841s
	[INFO] 10.244.0.22:35297 - 63835 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001541316s
	[INFO] 10.244.0.22:39888 - 6098 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001612657s
	[INFO] 10.244.0.26:49412 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000409049s
	[INFO] 10.244.0.26:40237 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094033s
	
	
	==> describe nodes <==
	Name:               addons-476078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-476078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=addons-476078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T20_59_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-476078
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 20:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-476078
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:07:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:05:10 +0000   Sun, 05 May 2024 20:58:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:05:10 +0000   Sun, 05 May 2024 20:58:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:05:10 +0000   Sun, 05 May 2024 20:58:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:05:10 +0000   Sun, 05 May 2024 20:59:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    addons-476078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb49be14c4984910ab2cbdb5bb38e82c
	  System UUID:                cb49be14-c498-4910-ab2c-bdb5bb38e82c
	  Boot ID:                    49930f1f-b9dc-45c3-8200-621abad2788b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-28xbq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  gcp-auth                    gcp-auth-5db96cd9b4-j6g6c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  headlamp                    headlamp-7559bf459f-9tvbl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 coredns-7db6d8ff4d-gnhf4                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m45s
	  kube-system                 etcd-addons-476078                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m1s
	  kube-system                 kube-apiserver-addons-476078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-controller-manager-addons-476078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-proxy-qrfs4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-scheduler-addons-476078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-2nv87          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             298Mi (7%)   426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m41s  kube-proxy       
	  Normal  Starting                 8m58s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m58s  kubelet          Node addons-476078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s  kubelet          Node addons-476078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s  kubelet          Node addons-476078 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m57s  kubelet          Node addons-476078 status is now: NodeReady
	  Normal  RegisteredNode           8m45s  node-controller  Node addons-476078 event: Registered Node addons-476078 in Controller
	
	
	==> dmesg <==
	[  +6.163970] kauditd_printk_skb: 139 callbacks suppressed
	[ +14.663027] kauditd_printk_skb: 19 callbacks suppressed
	[  +9.929351] kauditd_printk_skb: 2 callbacks suppressed
	[May 5 21:00] kauditd_printk_skb: 4 callbacks suppressed
	[ +21.638290] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.478882] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.690122] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.369762] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.447949] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.032700] kauditd_printk_skb: 16 callbacks suppressed
	[May 5 21:01] kauditd_printk_skb: 4 callbacks suppressed
	[ +29.959141] kauditd_printk_skb: 26 callbacks suppressed
	[ +13.438532] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.579573] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.034675] kauditd_printk_skb: 23 callbacks suppressed
	[May 5 21:02] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.015891] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.364459] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.698911] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.870348] kauditd_printk_skb: 31 callbacks suppressed
	[  +7.354942] kauditd_printk_skb: 24 callbacks suppressed
	[  +9.051246] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.537837] kauditd_printk_skb: 33 callbacks suppressed
	[May 5 21:04] kauditd_printk_skb: 6 callbacks suppressed
	[May 5 21:05] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [37b082aa54a6b8512d16fa745f2913d0cc46f3bfc8f258886e658467d8c4f95f] <==
	{"level":"info","ts":"2024-05-05T21:00:51.592043Z","caller":"traceutil/trace.go:171","msg":"trace[61689317] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1129; }","duration":"244.338543ms","start":"2024-05-05T21:00:51.347697Z","end":"2024-05-05T21:00:51.592035Z","steps":["trace[61689317] 'agreement among raft nodes before linearized reading'  (duration: 244.196676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:00:51.592157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.148411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85552"}
	{"level":"info","ts":"2024-05-05T21:00:51.592215Z","caller":"traceutil/trace.go:171","msg":"trace[1674021293] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1129; }","duration":"142.22488ms","start":"2024-05-05T21:00:51.449982Z","end":"2024-05-05T21:00:51.592207Z","steps":["trace[1674021293] 'agreement among raft nodes before linearized reading'  (duration: 142.043916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:00:51.592011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"301.634756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-05T21:00:51.592281Z","caller":"traceutil/trace.go:171","msg":"trace[1423719107] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1129; }","duration":"301.925736ms","start":"2024-05-05T21:00:51.290347Z","end":"2024-05-05T21:00:51.592273Z","steps":["trace[1423719107] 'agreement among raft nodes before linearized reading'  (duration: 301.639037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:00:51.592321Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:00:51.290334Z","time spent":"301.978989ms","remote":"127.0.0.1:45388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-05T21:01:09.929143Z","caller":"traceutil/trace.go:171","msg":"trace[2006913862] linearizableReadLoop","detail":"{readStateIndex:1246; appliedIndex:1245; }","duration":"186.506447ms","start":"2024-05-05T21:01:09.742589Z","end":"2024-05-05T21:01:09.929096Z","steps":["trace[2006913862] 'read index received'  (duration: 186.344887ms)","trace[2006913862] 'applied index is now lower than readState.Index'  (duration: 161.03µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:01:09.929447Z","caller":"traceutil/trace.go:171","msg":"trace[849019111] transaction","detail":"{read_only:false; response_revision:1204; number_of_response:1; }","duration":"230.096059ms","start":"2024-05-05T21:01:09.699337Z","end":"2024-05-05T21:01:09.929433Z","steps":["trace[849019111] 'process raft request'  (duration: 229.647647ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:01:09.92954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.308789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-05T21:01:09.930846Z","caller":"traceutil/trace.go:171","msg":"trace[1965804214] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:1204; }","duration":"110.70195ms","start":"2024-05-05T21:01:09.820131Z","end":"2024-05-05T21:01:09.930833Z","steps":["trace[1965804214] 'agreement among raft nodes before linearized reading'  (duration: 109.305294ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:01:09.929752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.154335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-05-05T21:01:09.931115Z","caller":"traceutil/trace.go:171","msg":"trace[1246568627] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1204; }","duration":"188.550432ms","start":"2024-05-05T21:01:09.742554Z","end":"2024-05-05T21:01:09.931104Z","steps":["trace[1246568627] 'agreement among raft nodes before linearized reading'  (duration: 187.011489ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:02:01.148015Z","caller":"traceutil/trace.go:171","msg":"trace[1692051660] linearizableReadLoop","detail":"{readStateIndex:1415; appliedIndex:1414; }","duration":"256.678693ms","start":"2024-05-05T21:02:00.891309Z","end":"2024-05-05T21:02:01.147988Z","steps":["trace[1692051660] 'read index received'  (duration: 256.496184ms)","trace[1692051660] 'applied index is now lower than readState.Index'  (duration: 181.942µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:02:01.148132Z","caller":"traceutil/trace.go:171","msg":"trace[1637680112] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"276.59198ms","start":"2024-05-05T21:02:00.871531Z","end":"2024-05-05T21:02:01.148123Z","steps":["trace[1637680112] 'process raft request'  (duration: 276.325721ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:02:01.148409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.078282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-05-05T21:02:01.14847Z","caller":"traceutil/trace.go:171","msg":"trace[1958757565] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1361; }","duration":"257.175848ms","start":"2024-05-05T21:02:00.891285Z","end":"2024-05-05T21:02:01.148461Z","steps":["trace[1958757565] 'agreement among raft nodes before linearized reading'  (duration: 257.042271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:02:01.148852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.132953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85982"}
	{"level":"info","ts":"2024-05-05T21:02:01.148937Z","caller":"traceutil/trace.go:171","msg":"trace[1286363820] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1361; }","duration":"191.244517ms","start":"2024-05-05T21:02:00.957685Z","end":"2024-05-05T21:02:01.14893Z","steps":["trace[1286363820] 'agreement among raft nodes before linearized reading'  (duration: 190.92352ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:02:17.953831Z","caller":"traceutil/trace.go:171","msg":"trace[458017061] linearizableReadLoop","detail":"{readStateIndex:1579; appliedIndex:1578; }","duration":"114.854288ms","start":"2024-05-05T21:02:17.83894Z","end":"2024-05-05T21:02:17.953794Z","steps":["trace[458017061] 'read index received'  (duration: 114.471128ms)","trace[458017061] 'applied index is now lower than readState.Index'  (duration: 382.668µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-05T21:02:17.954061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.091875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-05-05T21:02:17.954094Z","caller":"traceutil/trace.go:171","msg":"trace[335140393] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1518; }","duration":"115.170965ms","start":"2024-05-05T21:02:17.838915Z","end":"2024-05-05T21:02:17.954086Z","steps":["trace[335140393] 'agreement among raft nodes before linearized reading'  (duration: 115.027794ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:02:17.954301Z","caller":"traceutil/trace.go:171","msg":"trace[2052233926] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1518; }","duration":"387.059018ms","start":"2024-05-05T21:02:17.567236Z","end":"2024-05-05T21:02:17.954295Z","steps":["trace[2052233926] 'process raft request'  (duration: 386.272916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:02:17.954496Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:02:17.567225Z","time spent":"387.105791ms","remote":"127.0.0.1:45808","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":51,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/tiller-deploy\" mod_revision:945 > success:<request_delete_range:<key:\"/registry/deployments/kube-system/tiller-deploy\" > > failure:<request_range:<key:\"/registry/deployments/kube-system/tiller-deploy\" > >"}
	{"level":"warn","ts":"2024-05-05T21:02:22.088101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.963489ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12753788745123965929 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gadget/gadget-9p6xb.17ccb36e996f8bf0\" mod_revision:1229 > success:<request_delete_range:<key:\"/registry/events/gadget/gadget-9p6xb.17ccb36e996f8bf0\" > > failure:<request_range:<key:\"/registry/events/gadget/gadget-9p6xb.17ccb36e996f8bf0\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-05-05T21:02:22.088202Z","caller":"traceutil/trace.go:171","msg":"trace[1301753871] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1548; }","duration":"292.151907ms","start":"2024-05-05T21:02:21.796039Z","end":"2024-05-05T21:02:22.088191Z","steps":["trace[1301753871] 'process raft request'  (duration: 112.788554ms)","trace[1301753871] 'compare'  (duration: 178.891891ms)"],"step_count":2}
	
	
	==> gcp-auth [7444e66e637089e620537867e5aee0a435c4cddc878105392e4a9b23574d252a] <==
	2024/05/05 21:01:51 GCP Auth Webhook started!
	2024/05/05 21:01:52 Ready to marshal response ...
	2024/05/05 21:01:52 Ready to write response ...
	2024/05/05 21:01:52 Ready to marshal response ...
	2024/05/05 21:01:52 Ready to write response ...
	2024/05/05 21:01:53 Ready to marshal response ...
	2024/05/05 21:01:53 Ready to write response ...
	2024/05/05 21:01:53 Ready to marshal response ...
	2024/05/05 21:01:53 Ready to write response ...
	2024/05/05 21:01:53 Ready to marshal response ...
	2024/05/05 21:01:53 Ready to write response ...
	2024/05/05 21:02:04 Ready to marshal response ...
	2024/05/05 21:02:04 Ready to write response ...
	2024/05/05 21:02:07 Ready to marshal response ...
	2024/05/05 21:02:07 Ready to write response ...
	2024/05/05 21:02:10 Ready to marshal response ...
	2024/05/05 21:02:10 Ready to write response ...
	2024/05/05 21:02:10 Ready to marshal response ...
	2024/05/05 21:02:10 Ready to write response ...
	2024/05/05 21:02:33 Ready to marshal response ...
	2024/05/05 21:02:33 Ready to write response ...
	2024/05/05 21:02:37 Ready to marshal response ...
	2024/05/05 21:02:37 Ready to write response ...
	2024/05/05 21:04:54 Ready to marshal response ...
	2024/05/05 21:04:54 Ready to write response ...
	
	
	==> kernel <==
	 21:08:02 up 9 min,  0 users,  load average: 0.15, 0.78, 0.66
	Linux addons-476078 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7c82f83da0a7026479dc4c72e0319ebbcfc42e61eee427ce2b0ac46997d8e972] <==
	E0505 21:01:03.910559       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.223.33:443: connect: connection refused
	E0505 21:01:03.917252       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.223.33:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.223.33:443: connect: connection refused
	I0505 21:01:03.988046       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0505 21:01:53.105951       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.57.201"}
	I0505 21:02:16.737555       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0505 21:02:17.798056       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0505 21:02:23.821365       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0505 21:02:24.123442       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0505 21:02:29.603857       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0505 21:02:33.567433       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0505 21:02:33.748335       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.222.71"}
	I0505 21:02:56.504439       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.504533       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.537119       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.537161       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.540155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.540219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.581424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.581500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0505 21:02:56.582920       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0505 21:02:56.583588       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0505 21:02:57.541389       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0505 21:02:57.584110       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0505 21:02:57.617143       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0505 21:04:54.769080       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.140.40"}
	
	
	==> kube-controller-manager [4465f04bcc1d8801e9187dca5b031fb1551b06f41604b619d0436433e2a65777] <==
	W0505 21:05:29.205904       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:05:29.206037       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:05:47.291029       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:05:47.291086       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:05:53.710570       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:05:53.710823       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:05:57.344230       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:05:57.344294       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:06:22.738597       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:06:22.738862       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:06:34.579961       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:06:34.580087       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:06:40.931833       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:06:40.931953       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:06:42.431336       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:06:42.431438       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:07:20.980414       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:07:20.980914       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:07:26.995196       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:07:26.995253       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:07:34.441541       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:07:34.441705       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0505 21:07:42.351519       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0505 21:07:42.351767       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0505 21:08:00.415018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="16.615µs"
	
	
	==> kube-proxy [bbf845eacbdf1b723d1056d0e6ebbb49119b17090e3408dcf986522b20f20993] <==
	I0505 20:59:19.919104       1 server_linux.go:69] "Using iptables proxy"
	I0505 20:59:19.937571       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	I0505 20:59:20.163671       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 20:59:20.163712       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 20:59:20.163728       1 server_linux.go:165] "Using iptables Proxier"
	I0505 20:59:20.177250       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 20:59:20.177412       1 server.go:872] "Version info" version="v1.30.0"
	I0505 20:59:20.177428       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 20:59:20.178829       1 config.go:192] "Starting service config controller"
	I0505 20:59:20.178839       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 20:59:20.178864       1 config.go:101] "Starting endpoint slice config controller"
	I0505 20:59:20.178869       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 20:59:20.179253       1 config.go:319] "Starting node config controller"
	I0505 20:59:20.179294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 20:59:20.279884       1 shared_informer.go:320] Caches are synced for node config
	I0505 20:59:20.279914       1 shared_informer.go:320] Caches are synced for service config
	I0505 20:59:20.279937       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5647ed381b79011c0479628e434b280a42eaab6e7fa1c98f85ec48620a5f94f9] <==
	W0505 20:59:00.027827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 20:59:00.030320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 20:59:00.027869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 20:59:00.027905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 20:59:00.032820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 20:59:00.032888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 20:59:00.913584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 20:59:00.913708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 20:59:00.963112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 20:59:00.963191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 20:59:01.129163       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 20:59:01.129219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 20:59:01.133067       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0505 20:59:01.133129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0505 20:59:01.170428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0505 20:59:01.170484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0505 20:59:01.226127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 20:59:01.226182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 20:59:01.237895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 20:59:01.238008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 20:59:01.289050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 20:59:01.289126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 20:59:01.456089       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 20:59:01.456147       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0505 20:59:04.085169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 05 21:05:01 addons-476078 kubelet[1267]: I0505 21:05:01.562532    1267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816"} err="failed to get container status \"de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816\": rpc error: code = NotFound desc = could not find container \"de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816\": container with ID starting with de600e3b1de4b6e650877845aebd0ef6ce237fb4e2060c57061222341e386816 not found: ID does not exist"
	May 05 21:05:03 addons-476078 kubelet[1267]: E0505 21:05:03.146518    1267 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:05:03 addons-476078 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:05:03 addons-476078 kubelet[1267]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:05:03 addons-476078 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:05:03 addons-476078 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:05:04 addons-476078 kubelet[1267]: I0505 21:05:04.508817    1267 scope.go:117] "RemoveContainer" containerID="fd1c1a2f4f0bce290536c8709310300492909fa1d1d05e8a1c2770c8e382966e"
	May 05 21:05:04 addons-476078 kubelet[1267]: I0505 21:05:04.533730    1267 scope.go:117] "RemoveContainer" containerID="ba07ef0aee9a097f68533b37e783be89e0fbd2865d0a8be0eea00000654665a1"
	May 05 21:06:03 addons-476078 kubelet[1267]: E0505 21:06:03.148878    1267 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:06:03 addons-476078 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:06:03 addons-476078 kubelet[1267]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:06:03 addons-476078 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:06:03 addons-476078 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:07:03 addons-476078 kubelet[1267]: E0505 21:07:03.153223    1267 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:07:03 addons-476078 kubelet[1267]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:07:03 addons-476078 kubelet[1267]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:07:03 addons-476078 kubelet[1267]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:07:03 addons-476078 kubelet[1267]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:08:00 addons-476078 kubelet[1267]: I0505 21:08:00.436777    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-28xbq" podStartSLOduration=182.958461592 podStartE2EDuration="3m6.436733519s" podCreationTimestamp="2024-05-05 21:04:54 +0000 UTC" firstStartedPulling="2024-05-05 21:04:55.252839793 +0000 UTC m=+352.329222242" lastFinishedPulling="2024-05-05 21:04:58.731111717 +0000 UTC m=+355.807494169" observedRunningTime="2024-05-05 21:04:59.543130676 +0000 UTC m=+356.619513138" watchObservedRunningTime="2024-05-05 21:08:00.436733519 +0000 UTC m=+537.513115986"
	May 05 21:08:01 addons-476078 kubelet[1267]: I0505 21:08:01.939434    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfj6v\" (UniqueName: \"kubernetes.io/projected/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0-kube-api-access-hfj6v\") pod \"8b3d4733-9d64-4587-9ed8-b33c78c6ccf0\" (UID: \"8b3d4733-9d64-4587-9ed8-b33c78c6ccf0\") "
	May 05 21:08:01 addons-476078 kubelet[1267]: I0505 21:08:01.939514    1267 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0-tmp-dir\") pod \"8b3d4733-9d64-4587-9ed8-b33c78c6ccf0\" (UID: \"8b3d4733-9d64-4587-9ed8-b33c78c6ccf0\") "
	May 05 21:08:01 addons-476078 kubelet[1267]: I0505 21:08:01.940092    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "8b3d4733-9d64-4587-9ed8-b33c78c6ccf0" (UID: "8b3d4733-9d64-4587-9ed8-b33c78c6ccf0"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 05 21:08:01 addons-476078 kubelet[1267]: I0505 21:08:01.944892    1267 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0-kube-api-access-hfj6v" (OuterVolumeSpecName: "kube-api-access-hfj6v") pod "8b3d4733-9d64-4587-9ed8-b33c78c6ccf0" (UID: "8b3d4733-9d64-4587-9ed8-b33c78c6ccf0"). InnerVolumeSpecName "kube-api-access-hfj6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 05 21:08:02 addons-476078 kubelet[1267]: I0505 21:08:02.040855    1267 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hfj6v\" (UniqueName: \"kubernetes.io/projected/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0-kube-api-access-hfj6v\") on node \"addons-476078\" DevicePath \"\""
	May 05 21:08:02 addons-476078 kubelet[1267]: I0505 21:08:02.040885    1267 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b3d4733-9d64-4587-9ed8-b33c78c6ccf0-tmp-dir\") on node \"addons-476078\" DevicePath \"\""
	
	
	==> storage-provisioner [dd225ed77802f8083a3f863e503a7d3e8feb5443321fc208a3b8a0addb06f9a4] <==
	I0505 20:59:29.315804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0505 20:59:29.344539       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0505 20:59:29.344704       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0505 20:59:29.361144       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0505 20:59:29.362948       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-476078_4e1d4da4-531d-4e79-a12c-3ea4818c1ceb!
	I0505 20:59:29.372099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"420b0165-510a-4f68-93a0-80a2e3d822fd", APIVersion:"v1", ResourceVersion:"798", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-476078_4e1d4da4-531d-4e79-a12c-3ea4818c1ceb became leader
	I0505 20:59:29.464490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-476078_4e1d4da4-531d-4e79-a12c-3ea4818c1ceb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-476078 -n addons-476078
helpers_test.go:261: (dbg) Run:  kubectl --context addons-476078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (344.81s)
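
Aside on the kubelet errors in the log above: the repeated canary failures ("can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)") mean the guest kernel has no ip6tables nat support loaded. The snippet below is a minimal, hypothetical check for that condition on a Linux host, written against /proc/modules; it is illustrative only, not part of the minikube test suite, and whether the guest image is meant to ship ip6table_nat at all is an assumption.

// modulecheck.go - hypothetical sketch: report whether the ip6table_nat
// kernel module (needed for the ip6tables "nat" table) is currently loaded.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// moduleLoaded scans /proc/modules, where each loaded module is listed one
// per line with the module name as the first field.
func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && fields[0] == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := moduleLoaded("ip6table_nat")
	if err != nil {
		fmt.Fprintln(os.Stderr, "could not read /proc/modules:", err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("ip6table_nat is loaded; the ip6tables nat table should exist")
	} else {
		fmt.Println("ip6table_nat is not loaded; ip6tables -t nat will fail as in the kubelet log")
	}
}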

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-476078
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-476078: exit status 82 (2m0.502639324s)

                                                
                                                
-- stdout --
	* Stopping node "addons-476078"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-476078" : exit status 82
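
The stop exceeded its two-minute window with the VM still reported as "Running". As a purely illustrative follow-up (not something this test performs), the sketch below asks libvirt directly for the domain state via virsh; it assumes virsh is on PATH on the host and that the kvm2 driver named the libvirt domain after the profile, addons-476078, as the driver logs elsewhere in this report suggest.

// domstate.go - hypothetical helper: query libvirt for the state of the
// minikube-created domain after "minikube stop" times out.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	domain := "addons-476078" // assumption: domain is named after the profile

	out, err := exec.Command("virsh", "domstate", domain).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "virsh domstate %s failed: %v\n%s", domain, err, out)
		os.Exit(1)
	}
	state := strings.TrimSpace(string(out))
	fmt.Printf("libvirt reports %q for domain %s\n", state, domain)

	// If the guest ignored the shutdown request for the whole 2-minute window
	// (the GUEST_STOP_TIMEOUT above), "virsh destroy <domain>" would force it
	// off - deliberately not done here.
}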
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-476078
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-476078: exit status 11 (21.532190745s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-476078" : exit status 11
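
This enable attempt, and the disable attempts that follow, all fail on the same root cause: the SSH dial to 192.168.39.102:22 returns "no route to host", so minikube cannot run crictl to check whether the runtime is paused. The sketch below is a minimal reachability probe in the same spirit; the address is taken from the error text above, and the code is illustrative only.

// reach.go - minimal sketch: probe the node's SSH endpoint the way the
// "dial tcp 192.168.39.102:22: connect: no route to host" error suggests.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.102:22" // address taken from the stderr above

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A "no route to host" here matches the MK_ADDON_*_PAUSED failures:
		// the VM's network is unreachable even though the host still thinks
		// the machine is running.
		fmt.Printf("cannot reach %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("%s is reachable; SSH-level auth would be the next thing to check\n", addr)
}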
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-476078
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-476078: exit status 11 (6.139748745s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-476078" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-476078
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-476078: exit status 11 (6.145632672s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-476078" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 node stop m02 -v=7 --alsologtostderr
E0505 21:21:51.948060   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:22:15.673016   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.499173913s)

                                                
                                                
-- stdout --
	* Stopping node "ha-322980-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:21:28.338870   34150 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:21:28.339007   34150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:21:28.339017   34150 out.go:304] Setting ErrFile to fd 2...
	I0505 21:21:28.339021   34150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:21:28.339215   34150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:21:28.339434   34150 mustload.go:65] Loading cluster: ha-322980
	I0505 21:21:28.339823   34150 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:21:28.339839   34150 stop.go:39] StopHost: ha-322980-m02
	I0505 21:21:28.340197   34150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:21:28.340245   34150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:21:28.357509   34150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0505 21:21:28.357906   34150 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:21:28.358438   34150 main.go:141] libmachine: Using API Version  1
	I0505 21:21:28.358465   34150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:21:28.358839   34150 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:21:28.361383   34150 out.go:177] * Stopping node "ha-322980-m02"  ...
	I0505 21:21:28.362634   34150 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0505 21:21:28.362683   34150 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:21:28.362904   34150 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0505 21:21:28.362940   34150 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:21:28.365827   34150 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:21:28.366232   34150 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:21:28.366267   34150 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:21:28.366396   34150 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:21:28.366568   34150 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:21:28.366768   34150 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:21:28.366938   34150 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:21:28.457494   34150 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0505 21:21:28.514981   34150 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0505 21:21:28.577065   34150 main.go:141] libmachine: Stopping "ha-322980-m02"...
	I0505 21:21:28.577124   34150 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:21:28.578712   34150 main.go:141] libmachine: (ha-322980-m02) Calling .Stop
	I0505 21:21:28.581881   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 0/120
	I0505 21:21:29.582965   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 1/120
	I0505 21:21:30.584238   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 2/120
	I0505 21:21:31.586098   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 3/120
	I0505 21:21:32.587638   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 4/120
	I0505 21:21:33.589547   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 5/120
	I0505 21:21:34.591014   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 6/120
	I0505 21:21:35.592535   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 7/120
	I0505 21:21:36.594701   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 8/120
	I0505 21:21:37.596133   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 9/120
	I0505 21:21:38.598220   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 10/120
	I0505 21:21:39.599573   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 11/120
	I0505 21:21:40.601113   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 12/120
	I0505 21:21:41.602852   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 13/120
	I0505 21:21:42.604210   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 14/120
	I0505 21:21:43.605921   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 15/120
	I0505 21:21:44.607439   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 16/120
	I0505 21:21:45.608753   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 17/120
	I0505 21:21:46.610231   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 18/120
	I0505 21:21:47.611648   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 19/120
	I0505 21:21:48.614120   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 20/120
	I0505 21:21:49.616194   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 21/120
	I0505 21:21:50.617982   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 22/120
	I0505 21:21:51.619213   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 23/120
	I0505 21:21:52.620700   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 24/120
	I0505 21:21:53.622739   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 25/120
	I0505 21:21:54.624177   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 26/120
	I0505 21:21:55.625567   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 27/120
	I0505 21:21:56.627104   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 28/120
	I0505 21:21:57.628553   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 29/120
	I0505 21:21:58.630949   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 30/120
	I0505 21:21:59.632282   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 31/120
	I0505 21:22:00.633924   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 32/120
	I0505 21:22:01.635518   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 33/120
	I0505 21:22:02.636920   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 34/120
	I0505 21:22:03.638713   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 35/120
	I0505 21:22:04.640298   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 36/120
	I0505 21:22:05.642709   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 37/120
	I0505 21:22:06.644714   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 38/120
	I0505 21:22:07.646240   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 39/120
	I0505 21:22:08.648911   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 40/120
	I0505 21:22:09.650476   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 41/120
	I0505 21:22:10.652247   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 42/120
	I0505 21:22:11.653940   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 43/120
	I0505 21:22:12.655584   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 44/120
	I0505 21:22:13.657729   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 45/120
	I0505 21:22:14.659315   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 46/120
	I0505 21:22:15.660667   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 47/120
	I0505 21:22:16.662064   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 48/120
	I0505 21:22:17.663737   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 49/120
	I0505 21:22:18.665303   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 50/120
	I0505 21:22:19.667240   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 51/120
	I0505 21:22:20.669170   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 52/120
	I0505 21:22:21.670656   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 53/120
	I0505 21:22:22.672165   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 54/120
	I0505 21:22:23.674059   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 55/120
	I0505 21:22:24.676318   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 56/120
	I0505 21:22:25.677967   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 57/120
	I0505 21:22:26.679387   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 58/120
	I0505 21:22:27.680787   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 59/120
	I0505 21:22:28.683041   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 60/120
	I0505 21:22:29.684475   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 61/120
	I0505 21:22:30.685912   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 62/120
	I0505 21:22:31.687594   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 63/120
	I0505 21:22:32.688859   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 64/120
	I0505 21:22:33.690527   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 65/120
	I0505 21:22:34.692005   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 66/120
	I0505 21:22:35.693507   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 67/120
	I0505 21:22:36.694920   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 68/120
	I0505 21:22:37.696306   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 69/120
	I0505 21:22:38.698003   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 70/120
	I0505 21:22:39.699398   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 71/120
	I0505 21:22:40.700927   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 72/120
	I0505 21:22:41.702251   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 73/120
	I0505 21:22:42.704138   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 74/120
	I0505 21:22:43.706254   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 75/120
	I0505 21:22:44.707628   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 76/120
	I0505 21:22:45.708851   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 77/120
	I0505 21:22:46.710415   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 78/120
	I0505 21:22:47.711882   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 79/120
	I0505 21:22:48.713826   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 80/120
	I0505 21:22:49.715266   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 81/120
	I0505 21:22:50.716916   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 82/120
	I0505 21:22:51.718585   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 83/120
	I0505 21:22:52.720204   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 84/120
	I0505 21:22:53.721986   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 85/120
	I0505 21:22:54.723430   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 86/120
	I0505 21:22:55.724969   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 87/120
	I0505 21:22:56.726919   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 88/120
	I0505 21:22:57.728412   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 89/120
	I0505 21:22:58.730001   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 90/120
	I0505 21:22:59.731644   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 91/120
	I0505 21:23:00.734160   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 92/120
	I0505 21:23:01.735877   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 93/120
	I0505 21:23:02.737443   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 94/120
	I0505 21:23:03.738988   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 95/120
	I0505 21:23:04.741209   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 96/120
	I0505 21:23:05.743206   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 97/120
	I0505 21:23:06.744414   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 98/120
	I0505 21:23:07.746024   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 99/120
	I0505 21:23:08.748178   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 100/120
	I0505 21:23:09.749893   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 101/120
	I0505 21:23:10.752108   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 102/120
	I0505 21:23:11.753880   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 103/120
	I0505 21:23:12.755616   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 104/120
	I0505 21:23:13.757421   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 105/120
	I0505 21:23:14.759390   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 106/120
	I0505 21:23:15.760646   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 107/120
	I0505 21:23:16.761940   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 108/120
	I0505 21:23:17.763374   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 109/120
	I0505 21:23:18.765100   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 110/120
	I0505 21:23:19.766718   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 111/120
	I0505 21:23:20.767985   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 112/120
	I0505 21:23:21.770164   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 113/120
	I0505 21:23:22.771639   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 114/120
	I0505 21:23:23.773320   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 115/120
	I0505 21:23:24.775373   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 116/120
	I0505 21:23:25.776647   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 117/120
	I0505 21:23:26.778105   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 118/120
	I0505 21:23:27.779431   34150 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 119/120
	I0505 21:23:28.780513   34150 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0505 21:23:28.780647   34150 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-322980 node stop m02 -v=7 --alsologtostderr": exit status 30
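
The stderr above shows the kvm2 driver polling the machine state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with exit status 30. The sketch below reproduces that polling pattern in isolation; isStopped is a stand-in for the driver's real libvirt state check, and none of the names here are minikube's actual API.

// stopwait.go - sketch of the 1-second, 120-attempt stop-wait loop visible in
// the log above. isStopped is a placeholder, not the real driver call.
package main

import (
	"errors"
	"fmt"
	"time"
)

// isStopped stands in for asking the hypervisor whether the domain has
// actually shut down; here it never succeeds, mirroring the failing run.
func isStopped() bool { return false }

func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		if isStopped() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 120 attempts at 1s each is the 2-minute budget seen in the log.
	if err := waitForStop(120); err != nil {
		fmt.Println("stop err:", err) // mirrors the "stop err" line above
	}
}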
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 3 (19.268653195s)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:23:28.840767   34582 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:23:28.840909   34582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:23:28.840920   34582 out.go:304] Setting ErrFile to fd 2...
	I0505 21:23:28.840927   34582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:23:28.841141   34582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:23:28.841317   34582 out.go:298] Setting JSON to false
	I0505 21:23:28.841347   34582 mustload.go:65] Loading cluster: ha-322980
	I0505 21:23:28.841400   34582 notify.go:220] Checking for updates...
	I0505 21:23:28.841741   34582 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:23:28.841758   34582 status.go:255] checking status of ha-322980 ...
	I0505 21:23:28.842158   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:28.842238   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:28.860749   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
	I0505 21:23:28.861116   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:28.861804   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:28.861837   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:28.862166   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:28.862383   34582 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:23:28.863901   34582 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:23:28.863920   34582 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:23:28.864186   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:28.864223   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:28.879168   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0505 21:23:28.879625   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:28.880198   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:28.880222   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:28.880571   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:28.880780   34582 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:23:28.883862   34582 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:28.884343   34582 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:23:28.884368   34582 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:28.884518   34582 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:23:28.884819   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:28.884857   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:28.899253   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41717
	I0505 21:23:28.899699   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:28.900175   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:28.900195   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:28.900603   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:28.900818   34582 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:23:28.901014   34582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:28.901041   34582 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:23:28.903791   34582 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:28.904201   34582 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:23:28.904230   34582 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:28.904362   34582 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:23:28.904514   34582 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:23:28.904682   34582 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:23:28.904804   34582 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:23:28.994815   34582 ssh_runner.go:195] Run: systemctl --version
	I0505 21:23:29.003702   34582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:23:29.024988   34582 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:23:29.025025   34582 api_server.go:166] Checking apiserver status ...
	I0505 21:23:29.025066   34582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:23:29.044206   34582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:23:29.056295   34582 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:23:29.056359   34582 ssh_runner.go:195] Run: ls
	I0505 21:23:29.061640   34582 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:23:29.066509   34582 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:23:29.066535   34582 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:23:29.066548   34582 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:23:29.066567   34582 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:23:29.066858   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:29.066890   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:29.082314   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
	I0505 21:23:29.082787   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:29.083221   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:29.083241   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:29.083540   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:29.083763   34582 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:23:29.085219   34582 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:23:29.085236   34582 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:23:29.085626   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:29.085680   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:29.100285   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
	I0505 21:23:29.100672   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:29.101132   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:29.101152   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:29.101487   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:29.101667   34582 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:23:29.104387   34582 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:29.104957   34582 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:23:29.104981   34582 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:29.105103   34582 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:23:29.105403   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:29.105436   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:29.120880   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
	I0505 21:23:29.121373   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:29.121896   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:29.121914   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:29.122227   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:29.122419   34582 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:23:29.122615   34582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:29.122635   34582 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:23:29.125353   34582 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:29.125811   34582 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:23:29.125843   34582 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:29.125990   34582 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:23:29.126197   34582 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:23:29.126339   34582 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:23:29.126469   34582 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:23:47.675703   34582 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:23:47.675779   34582 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:23:47.675800   34582 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:23:47.675813   34582 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:23:47.675846   34582 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:23:47.675854   34582 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:23:47.676236   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:47.676290   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:47.691864   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0505 21:23:47.692283   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:47.692790   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:47.692810   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:47.693176   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:47.693363   34582 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:23:47.694798   34582 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:23:47.694814   34582 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:23:47.695119   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:47.695174   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:47.709811   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0505 21:23:47.710187   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:47.710611   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:47.710625   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:47.710916   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:47.711080   34582 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:23:47.713788   34582 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:47.714132   34582 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:23:47.714172   34582 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:47.714342   34582 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:23:47.714684   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:47.714722   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:47.729292   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0505 21:23:47.729678   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:47.730123   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:47.730145   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:47.730455   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:47.730600   34582 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:23:47.730800   34582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:47.730823   34582 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:23:47.733319   34582 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:47.733729   34582 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:23:47.733762   34582 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:47.733911   34582 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:23:47.734072   34582 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:23:47.734194   34582 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:23:47.734316   34582 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:23:47.821839   34582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:23:47.841066   34582 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:23:47.841101   34582 api_server.go:166] Checking apiserver status ...
	I0505 21:23:47.841143   34582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:23:47.858289   34582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:23:47.868963   34582 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:23:47.869009   34582 ssh_runner.go:195] Run: ls
	I0505 21:23:47.874742   34582 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:23:47.880973   34582 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:23:47.880994   34582 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:23:47.881002   34582 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:23:47.881015   34582 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:23:47.881352   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:47.881404   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:47.896683   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43243
	I0505 21:23:47.897049   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:47.897507   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:47.897527   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:47.897835   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:47.898032   34582 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:23:47.899367   34582 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:23:47.899382   34582 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:23:47.899710   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:47.899746   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:47.914002   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0505 21:23:47.914390   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:47.914836   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:47.914859   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:47.915213   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:47.915406   34582 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:23:47.918155   34582 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:47.918639   34582 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:23:47.918663   34582 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:47.918796   34582 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:23:47.919074   34582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:47.919112   34582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:47.934308   34582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37409
	I0505 21:23:47.934766   34582 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:47.935317   34582 main.go:141] libmachine: Using API Version  1
	I0505 21:23:47.935337   34582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:47.935649   34582 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:47.935838   34582 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:23:47.936000   34582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:47.936026   34582 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:23:47.938884   34582 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:47.939385   34582 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:23:47.939404   34582 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:47.939586   34582 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:23:47.939746   34582 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:23:47.939890   34582 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:23:47.940005   34582 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:23:48.029682   34582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:23:48.048301   34582 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-322980 -n ha-322980
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-322980 logs -n 25: (1.638685315s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m03_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m04 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp testdata/cp-test.txt                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m04_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03:/home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m03 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-322980 node stop m02 -v=7                                                     | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:15:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:15:28.192694   29367 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:15:28.192822   29367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:15:28.192834   29367 out.go:304] Setting ErrFile to fd 2...
	I0505 21:15:28.192839   29367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:15:28.193040   29367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:15:28.193594   29367 out.go:298] Setting JSON to false
	I0505 21:15:28.194511   29367 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3475,"bootTime":1714940253,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:15:28.194576   29367 start.go:139] virtualization: kvm guest
	I0505 21:15:28.196753   29367 out.go:177] * [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:15:28.198175   29367 notify.go:220] Checking for updates...
	I0505 21:15:28.198200   29367 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:15:28.199714   29367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:15:28.201298   29367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:15:28.202627   29367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:15:28.204102   29367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:15:28.205596   29367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:15:28.206976   29367 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:15:28.240336   29367 out.go:177] * Using the kvm2 driver based on user configuration
	I0505 21:15:28.241665   29367 start.go:297] selected driver: kvm2
	I0505 21:15:28.241678   29367 start.go:901] validating driver "kvm2" against <nil>
	I0505 21:15:28.241688   29367 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:15:28.242280   29367 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:15:28.242338   29367 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:15:28.256278   29367 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:15:28.256351   29367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 21:15:28.256556   29367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:15:28.256600   29367 cni.go:84] Creating CNI manager for ""
	I0505 21:15:28.256611   29367 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0505 21:15:28.256617   29367 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0505 21:15:28.256669   29367 start.go:340] cluster config:
	{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:15:28.256755   29367 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:15:28.259217   29367 out.go:177] * Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	I0505 21:15:28.260551   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:15:28.260586   29367 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:15:28.260596   29367 cache.go:56] Caching tarball of preloaded images
	I0505 21:15:28.260684   29367 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:15:28.260695   29367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:15:28.260971   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:15:28.260991   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json: {Name:mkcd41b605e73b5e716932d5592f48027cf09c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:28.261114   29367 start.go:360] acquireMachinesLock for ha-322980: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:15:28.261142   29367 start.go:364] duration metric: took 14.244µs to acquireMachinesLock for "ha-322980"
	I0505 21:15:28.261158   29367 start.go:93] Provisioning new machine with config: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:15:28.261248   29367 start.go:125] createHost starting for "" (driver="kvm2")
	I0505 21:15:28.263067   29367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 21:15:28.263187   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:15:28.263229   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:15:28.277004   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
	I0505 21:15:28.277389   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:15:28.278009   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:15:28.278028   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:15:28.278337   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:15:28.278503   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:28.278611   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:28.278763   29367 start.go:159] libmachine.API.Create for "ha-322980" (driver="kvm2")
	I0505 21:15:28.278784   29367 client.go:168] LocalClient.Create starting
	I0505 21:15:28.278807   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 21:15:28.278833   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:15:28.278847   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:15:28.278893   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 21:15:28.278918   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:15:28.278931   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:15:28.278947   29367 main.go:141] libmachine: Running pre-create checks...
	I0505 21:15:28.278955   29367 main.go:141] libmachine: (ha-322980) Calling .PreCreateCheck
	I0505 21:15:28.279269   29367 main.go:141] libmachine: (ha-322980) Calling .GetConfigRaw
	I0505 21:15:28.279626   29367 main.go:141] libmachine: Creating machine...
	I0505 21:15:28.279639   29367 main.go:141] libmachine: (ha-322980) Calling .Create
	I0505 21:15:28.279750   29367 main.go:141] libmachine: (ha-322980) Creating KVM machine...
	I0505 21:15:28.280835   29367 main.go:141] libmachine: (ha-322980) DBG | found existing default KVM network
	I0505 21:15:28.281458   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.281306   29390 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0505 21:15:28.281491   29367 main.go:141] libmachine: (ha-322980) DBG | created network xml: 
	I0505 21:15:28.281504   29367 main.go:141] libmachine: (ha-322980) DBG | <network>
	I0505 21:15:28.281520   29367 main.go:141] libmachine: (ha-322980) DBG |   <name>mk-ha-322980</name>
	I0505 21:15:28.281526   29367 main.go:141] libmachine: (ha-322980) DBG |   <dns enable='no'/>
	I0505 21:15:28.281530   29367 main.go:141] libmachine: (ha-322980) DBG |   
	I0505 21:15:28.281539   29367 main.go:141] libmachine: (ha-322980) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0505 21:15:28.281545   29367 main.go:141] libmachine: (ha-322980) DBG |     <dhcp>
	I0505 21:15:28.281552   29367 main.go:141] libmachine: (ha-322980) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0505 21:15:28.281559   29367 main.go:141] libmachine: (ha-322980) DBG |     </dhcp>
	I0505 21:15:28.281564   29367 main.go:141] libmachine: (ha-322980) DBG |   </ip>
	I0505 21:15:28.281569   29367 main.go:141] libmachine: (ha-322980) DBG |   
	I0505 21:15:28.281574   29367 main.go:141] libmachine: (ha-322980) DBG | </network>
	I0505 21:15:28.281581   29367 main.go:141] libmachine: (ha-322980) DBG | 
	I0505 21:15:28.286231   29367 main.go:141] libmachine: (ha-322980) DBG | trying to create private KVM network mk-ha-322980 192.168.39.0/24...
	I0505 21:15:28.349262   29367 main.go:141] libmachine: (ha-322980) DBG | private KVM network mk-ha-322980 192.168.39.0/24 created
	I0505 21:15:28.349288   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.349223   29390 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:15:28.349301   29367 main.go:141] libmachine: (ha-322980) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980 ...
	I0505 21:15:28.349318   29367 main.go:141] libmachine: (ha-322980) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 21:15:28.349344   29367 main.go:141] libmachine: (ha-322980) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 21:15:28.575989   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.575855   29390 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa...
	I0505 21:15:28.638991   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.638848   29390 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/ha-322980.rawdisk...
	I0505 21:15:28.639022   29367 main.go:141] libmachine: (ha-322980) DBG | Writing magic tar header
	I0505 21:15:28.639075   29367 main.go:141] libmachine: (ha-322980) DBG | Writing SSH key tar header
	I0505 21:15:28.639113   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980 (perms=drwx------)
	I0505 21:15:28.639131   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.638957   29390 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980 ...
	I0505 21:15:28.639141   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980
	I0505 21:15:28.639148   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 21:15:28.639158   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 21:15:28.639166   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 21:15:28.639180   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 21:15:28.639194   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 21:15:28.639208   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 21:15:28.639221   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:15:28.639230   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 21:15:28.639235   29367 main.go:141] libmachine: (ha-322980) Creating domain...
	I0505 21:15:28.639247   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 21:15:28.639254   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins
	I0505 21:15:28.639260   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home
	I0505 21:15:28.639265   29367 main.go:141] libmachine: (ha-322980) DBG | Skipping /home - not owner
	I0505 21:15:28.640341   29367 main.go:141] libmachine: (ha-322980) define libvirt domain using xml: 
	I0505 21:15:28.640365   29367 main.go:141] libmachine: (ha-322980) <domain type='kvm'>
	I0505 21:15:28.640396   29367 main.go:141] libmachine: (ha-322980)   <name>ha-322980</name>
	I0505 21:15:28.640419   29367 main.go:141] libmachine: (ha-322980)   <memory unit='MiB'>2200</memory>
	I0505 21:15:28.640435   29367 main.go:141] libmachine: (ha-322980)   <vcpu>2</vcpu>
	I0505 21:15:28.640447   29367 main.go:141] libmachine: (ha-322980)   <features>
	I0505 21:15:28.640460   29367 main.go:141] libmachine: (ha-322980)     <acpi/>
	I0505 21:15:28.640472   29367 main.go:141] libmachine: (ha-322980)     <apic/>
	I0505 21:15:28.640483   29367 main.go:141] libmachine: (ha-322980)     <pae/>
	I0505 21:15:28.640502   29367 main.go:141] libmachine: (ha-322980)     
	I0505 21:15:28.640515   29367 main.go:141] libmachine: (ha-322980)   </features>
	I0505 21:15:28.640525   29367 main.go:141] libmachine: (ha-322980)   <cpu mode='host-passthrough'>
	I0505 21:15:28.640538   29367 main.go:141] libmachine: (ha-322980)   
	I0505 21:15:28.640550   29367 main.go:141] libmachine: (ha-322980)   </cpu>
	I0505 21:15:28.640590   29367 main.go:141] libmachine: (ha-322980)   <os>
	I0505 21:15:28.640634   29367 main.go:141] libmachine: (ha-322980)     <type>hvm</type>
	I0505 21:15:28.640650   29367 main.go:141] libmachine: (ha-322980)     <boot dev='cdrom'/>
	I0505 21:15:28.640722   29367 main.go:141] libmachine: (ha-322980)     <boot dev='hd'/>
	I0505 21:15:28.640747   29367 main.go:141] libmachine: (ha-322980)     <bootmenu enable='no'/>
	I0505 21:15:28.640770   29367 main.go:141] libmachine: (ha-322980)   </os>
	I0505 21:15:28.640791   29367 main.go:141] libmachine: (ha-322980)   <devices>
	I0505 21:15:28.640803   29367 main.go:141] libmachine: (ha-322980)     <disk type='file' device='cdrom'>
	I0505 21:15:28.640811   29367 main.go:141] libmachine: (ha-322980)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/boot2docker.iso'/>
	I0505 21:15:28.640821   29367 main.go:141] libmachine: (ha-322980)       <target dev='hdc' bus='scsi'/>
	I0505 21:15:28.640837   29367 main.go:141] libmachine: (ha-322980)       <readonly/>
	I0505 21:15:28.640848   29367 main.go:141] libmachine: (ha-322980)     </disk>
	I0505 21:15:28.640857   29367 main.go:141] libmachine: (ha-322980)     <disk type='file' device='disk'>
	I0505 21:15:28.640872   29367 main.go:141] libmachine: (ha-322980)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 21:15:28.640899   29367 main.go:141] libmachine: (ha-322980)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/ha-322980.rawdisk'/>
	I0505 21:15:28.640923   29367 main.go:141] libmachine: (ha-322980)       <target dev='hda' bus='virtio'/>
	I0505 21:15:28.640936   29367 main.go:141] libmachine: (ha-322980)     </disk>
	I0505 21:15:28.640946   29367 main.go:141] libmachine: (ha-322980)     <interface type='network'>
	I0505 21:15:28.640961   29367 main.go:141] libmachine: (ha-322980)       <source network='mk-ha-322980'/>
	I0505 21:15:28.640973   29367 main.go:141] libmachine: (ha-322980)       <model type='virtio'/>
	I0505 21:15:28.640985   29367 main.go:141] libmachine: (ha-322980)     </interface>
	I0505 21:15:28.641002   29367 main.go:141] libmachine: (ha-322980)     <interface type='network'>
	I0505 21:15:28.641017   29367 main.go:141] libmachine: (ha-322980)       <source network='default'/>
	I0505 21:15:28.641027   29367 main.go:141] libmachine: (ha-322980)       <model type='virtio'/>
	I0505 21:15:28.641037   29367 main.go:141] libmachine: (ha-322980)     </interface>
	I0505 21:15:28.641049   29367 main.go:141] libmachine: (ha-322980)     <serial type='pty'>
	I0505 21:15:28.641069   29367 main.go:141] libmachine: (ha-322980)       <target port='0'/>
	I0505 21:15:28.641077   29367 main.go:141] libmachine: (ha-322980)     </serial>
	I0505 21:15:28.641083   29367 main.go:141] libmachine: (ha-322980)     <console type='pty'>
	I0505 21:15:28.641090   29367 main.go:141] libmachine: (ha-322980)       <target type='serial' port='0'/>
	I0505 21:15:28.641097   29367 main.go:141] libmachine: (ha-322980)     </console>
	I0505 21:15:28.641103   29367 main.go:141] libmachine: (ha-322980)     <rng model='virtio'>
	I0505 21:15:28.641109   29367 main.go:141] libmachine: (ha-322980)       <backend model='random'>/dev/random</backend>
	I0505 21:15:28.641116   29367 main.go:141] libmachine: (ha-322980)     </rng>
	I0505 21:15:28.641121   29367 main.go:141] libmachine: (ha-322980)     
	I0505 21:15:28.641130   29367 main.go:141] libmachine: (ha-322980)     
	I0505 21:15:28.641138   29367 main.go:141] libmachine: (ha-322980)   </devices>
	I0505 21:15:28.641142   29367 main.go:141] libmachine: (ha-322980) </domain>
	I0505 21:15:28.641166   29367 main.go:141] libmachine: (ha-322980) 
	I0505 21:15:28.645282   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:1e:18:46 in network default
	I0505 21:15:28.645839   29367 main.go:141] libmachine: (ha-322980) Ensuring networks are active...
	I0505 21:15:28.645853   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:28.646494   29367 main.go:141] libmachine: (ha-322980) Ensuring network default is active
	I0505 21:15:28.646824   29367 main.go:141] libmachine: (ha-322980) Ensuring network mk-ha-322980 is active
	I0505 21:15:28.647503   29367 main.go:141] libmachine: (ha-322980) Getting domain xml...
	I0505 21:15:28.648454   29367 main.go:141] libmachine: (ha-322980) Creating domain...
	I0505 21:15:29.809417   29367 main.go:141] libmachine: (ha-322980) Waiting to get IP...
	I0505 21:15:29.810285   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:29.810703   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:29.810752   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:29.810700   29390 retry.go:31] will retry after 224.872521ms: waiting for machine to come up
	I0505 21:15:30.037302   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:30.037791   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:30.037814   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:30.037752   29390 retry.go:31] will retry after 295.377047ms: waiting for machine to come up
	I0505 21:15:30.335326   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:30.335810   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:30.335840   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:30.335751   29390 retry.go:31] will retry after 344.396951ms: waiting for machine to come up
	I0505 21:15:30.682167   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:30.682556   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:30.682601   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:30.682539   29390 retry.go:31] will retry after 436.748422ms: waiting for machine to come up
	I0505 21:15:31.121290   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:31.121701   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:31.121730   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:31.121670   29390 retry.go:31] will retry after 732.144029ms: waiting for machine to come up
	I0505 21:15:31.855412   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:31.855798   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:31.855827   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:31.855742   29390 retry.go:31] will retry after 897.748028ms: waiting for machine to come up
	I0505 21:15:32.754714   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:32.755252   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:32.755296   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:32.755209   29390 retry.go:31] will retry after 944.202996ms: waiting for machine to come up
	I0505 21:15:33.701028   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:33.701492   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:33.701524   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:33.701454   29390 retry.go:31] will retry after 926.520724ms: waiting for machine to come up
	I0505 21:15:34.629504   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:34.629929   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:34.629958   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:34.629897   29390 retry.go:31] will retry after 1.386455445s: waiting for machine to come up
	I0505 21:15:36.018319   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:36.018716   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:36.018744   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:36.018672   29390 retry.go:31] will retry after 1.708193894s: waiting for machine to come up
	I0505 21:15:37.728811   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:37.729339   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:37.729369   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:37.729277   29390 retry.go:31] will retry after 2.129933651s: waiting for machine to come up
	I0505 21:15:39.861508   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:39.861977   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:39.862013   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:39.861925   29390 retry.go:31] will retry after 3.149022906s: waiting for machine to come up
	I0505 21:15:43.014261   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:43.014694   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:43.014726   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:43.014669   29390 retry.go:31] will retry after 3.501000441s: waiting for machine to come up
	I0505 21:15:46.520000   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:46.520497   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:46.520523   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:46.520460   29390 retry.go:31] will retry after 5.233613527s: waiting for machine to come up
	I0505 21:15:51.757587   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.758063   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has current primary IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.758085   29367 main.go:141] libmachine: (ha-322980) Found IP for machine: 192.168.39.178
	I0505 21:15:51.758095   29367 main.go:141] libmachine: (ha-322980) Reserving static IP address...
	I0505 21:15:51.758503   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find host DHCP lease matching {name: "ha-322980", mac: "52:54:00:b4:13:35", ip: "192.168.39.178"} in network mk-ha-322980
	I0505 21:15:51.828261   29367 main.go:141] libmachine: (ha-322980) Reserved static IP address: 192.168.39.178
	I0505 21:15:51.828288   29367 main.go:141] libmachine: (ha-322980) Waiting for SSH to be available...
	I0505 21:15:51.828298   29367 main.go:141] libmachine: (ha-322980) DBG | Getting to WaitForSSH function...
	I0505 21:15:51.830888   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.831206   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:51.831227   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.831458   29367 main.go:141] libmachine: (ha-322980) DBG | Using SSH client type: external
	I0505 21:15:51.831499   29367 main.go:141] libmachine: (ha-322980) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa (-rw-------)
	I0505 21:15:51.831531   29367 main.go:141] libmachine: (ha-322980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:15:51.831545   29367 main.go:141] libmachine: (ha-322980) DBG | About to run SSH command:
	I0505 21:15:51.831557   29367 main.go:141] libmachine: (ha-322980) DBG | exit 0
	I0505 21:15:51.963706   29367 main.go:141] libmachine: (ha-322980) DBG | SSH cmd err, output: <nil>: 
	I0505 21:15:51.963939   29367 main.go:141] libmachine: (ha-322980) KVM machine creation complete!
	I0505 21:15:51.964298   29367 main.go:141] libmachine: (ha-322980) Calling .GetConfigRaw
	I0505 21:15:51.964922   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:51.965126   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:51.965287   29367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 21:15:51.965302   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:15:51.966422   29367 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 21:15:51.966438   29367 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 21:15:51.966446   29367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 21:15:51.966454   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:51.968657   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.968955   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:51.969006   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.969066   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:51.969215   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:51.969330   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:51.969494   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:51.969595   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:51.969765   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:51.969776   29367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 21:15:52.079133   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:15:52.079164   29367 main.go:141] libmachine: Detecting the provisioner...
	I0505 21:15:52.079172   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.081815   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.082187   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.082216   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.082460   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.082660   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.082896   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.083061   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.083231   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.083444   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.083458   29367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 21:15:52.192292   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 21:15:52.192350   29367 main.go:141] libmachine: found compatible host: buildroot
	I0505 21:15:52.192359   29367 main.go:141] libmachine: Provisioning with buildroot...
	I0505 21:15:52.192370   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:52.192643   29367 buildroot.go:166] provisioning hostname "ha-322980"
	I0505 21:15:52.192662   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:52.192841   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.195494   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.195879   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.195898   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.196101   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.196276   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.196417   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.196534   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.196696   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.196858   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.196868   29367 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980 && echo "ha-322980" | sudo tee /etc/hostname
	I0505 21:15:52.319248   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:15:52.319297   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.321946   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.322311   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.322338   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.322499   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.322732   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.322864   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.323023   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.323163   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.323366   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.323392   29367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:15:52.441696   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:15:52.441734   29367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:15:52.441772   29367 buildroot.go:174] setting up certificates
	I0505 21:15:52.441783   29367 provision.go:84] configureAuth start
	I0505 21:15:52.441792   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:52.442117   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:52.444978   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.445360   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.445391   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.445545   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.447772   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.448155   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.448193   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.448203   29367 provision.go:143] copyHostCerts
	I0505 21:15:52.448245   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:15:52.448275   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:15:52.448284   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:15:52.448352   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:15:52.448435   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:15:52.448454   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:15:52.448462   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:15:52.448504   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:15:52.448562   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:15:52.448582   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:15:52.448589   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:15:52.448620   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:15:52.448701   29367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980 san=[127.0.0.1 192.168.39.178 ha-322980 localhost minikube]
	I0505 21:15:52.539458   29367 provision.go:177] copyRemoteCerts
	I0505 21:15:52.539531   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:15:52.539554   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.542206   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.542557   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.542582   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.542752   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.542925   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.543062   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.543179   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:52.628431   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:15:52.628506   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:15:52.655798   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:15:52.655877   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0505 21:15:52.681175   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:15:52.681258   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 21:15:52.706740   29367 provision.go:87] duration metric: took 264.947145ms to configureAuth
	I0505 21:15:52.706766   29367 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:15:52.706930   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:15:52.706995   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.709586   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.709960   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.709990   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.710162   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.710322   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.710478   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.710570   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.710696   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.710859   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.710875   29367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:15:53.006304   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:15:53.006333   29367 main.go:141] libmachine: Checking connection to Docker...
	I0505 21:15:53.006358   29367 main.go:141] libmachine: (ha-322980) Calling .GetURL
	I0505 21:15:53.007738   29367 main.go:141] libmachine: (ha-322980) DBG | Using libvirt version 6000000
	I0505 21:15:53.011167   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.011587   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.011610   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.011767   29367 main.go:141] libmachine: Docker is up and running!
	I0505 21:15:53.011809   29367 main.go:141] libmachine: Reticulating splines...
	I0505 21:15:53.011819   29367 client.go:171] duration metric: took 24.733029739s to LocalClient.Create
	I0505 21:15:53.011841   29367 start.go:167] duration metric: took 24.733077709s to libmachine.API.Create "ha-322980"
	I0505 21:15:53.011854   29367 start.go:293] postStartSetup for "ha-322980" (driver="kvm2")
	I0505 21:15:53.011867   29367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:15:53.011882   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.012119   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:15:53.012143   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.014385   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.014755   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.014781   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.015014   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.015207   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.015495   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.015629   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:53.099090   29367 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:15:53.103691   29367 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:15:53.103710   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:15:53.103760   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:15:53.103845   29367 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:15:53.103856   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:15:53.103945   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:15:53.114809   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:15:53.139829   29367 start.go:296] duration metric: took 127.963218ms for postStartSetup
	I0505 21:15:53.139873   29367 main.go:141] libmachine: (ha-322980) Calling .GetConfigRaw
	I0505 21:15:53.140452   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:53.143012   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.143276   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.143294   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.143579   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:15:53.143789   29367 start.go:128] duration metric: took 24.882530508s to createHost
	I0505 21:15:53.143822   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.146037   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.146352   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.146379   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.146527   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.146704   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.146847   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.146984   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.147126   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:53.147322   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:53.147339   29367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:15:53.256861   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714943753.206706515
	
	I0505 21:15:53.256880   29367 fix.go:216] guest clock: 1714943753.206706515
	I0505 21:15:53.256887   29367 fix.go:229] Guest: 2024-05-05 21:15:53.206706515 +0000 UTC Remote: 2024-05-05 21:15:53.14380974 +0000 UTC m=+25.006569318 (delta=62.896775ms)
	I0505 21:15:53.256905   29367 fix.go:200] guest clock delta is within tolerance: 62.896775ms
	I0505 21:15:53.256911   29367 start.go:83] releasing machines lock for "ha-322980", held for 24.995760647s
	I0505 21:15:53.256934   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.257228   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:53.259522   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.259876   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.259902   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.260008   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.260428   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.260593   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.260708   29367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:15:53.260753   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.260808   29367 ssh_runner.go:195] Run: cat /version.json
	I0505 21:15:53.260841   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.263354   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263387   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263695   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.263719   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263744   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.263759   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263866   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.264048   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.264065   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.264201   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.264218   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.264310   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:53.264387   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.264498   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:53.368839   29367 ssh_runner.go:195] Run: systemctl --version
	I0505 21:15:53.375745   29367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:15:53.548045   29367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:15:53.554925   29367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:15:53.554995   29367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:15:53.575884   29367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 21:15:53.575902   29367 start.go:494] detecting cgroup driver to use...
	I0505 21:15:53.575948   29367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:15:53.595546   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:15:53.610574   29367 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:15:53.610629   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:15:53.625764   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:15:53.640786   29367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:15:53.762725   29367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:15:53.950332   29367 docker.go:233] disabling docker service ...
	I0505 21:15:53.950389   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:15:53.966703   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:15:53.981102   29367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:15:54.118651   29367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:15:54.236140   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:15:54.251750   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:15:54.273464   29367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:15:54.273533   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.285094   29367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:15:54.285185   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.297250   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.308936   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.323138   29367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:15:54.337480   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.350674   29367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.370496   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.382773   29367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:15:54.394261   29367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 21:15:54.394327   29367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 21:15:54.410065   29367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:15:54.421371   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:15:54.533560   29367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:15:54.689822   29367 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:15:54.689886   29367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:15:54.696023   29367 start.go:562] Will wait 60s for crictl version
	I0505 21:15:54.696071   29367 ssh_runner.go:195] Run: which crictl
	I0505 21:15:54.700847   29367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:15:54.751750   29367 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:15:54.751846   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:15:54.786179   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:15:54.823252   29367 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:15:54.824391   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:54.827175   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:54.827512   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:54.827542   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:54.827740   29367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:15:54.832212   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:15:54.847192   29367 kubeadm.go:877] updating cluster {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:15:54.847291   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:15:54.847335   29367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:15:54.882126   29367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0505 21:15:54.882179   29367 ssh_runner.go:195] Run: which lz4
	I0505 21:15:54.886447   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0505 21:15:54.886534   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0505 21:15:54.891461   29367 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 21:15:54.891489   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0505 21:15:56.548982   29367 crio.go:462] duration metric: took 1.662478276s to copy over tarball
	I0505 21:15:56.549054   29367 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 21:15:59.170048   29367 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.620958409s)
	I0505 21:15:59.170082   29367 crio.go:469] duration metric: took 2.621068356s to extract the tarball
	I0505 21:15:59.170090   29367 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 21:15:59.212973   29367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:15:59.267250   29367 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:15:59.267269   29367 cache_images.go:84] Images are preloaded, skipping loading
	I0505 21:15:59.267276   29367 kubeadm.go:928] updating node { 192.168.39.178 8443 v1.30.0 crio true true} ...
	I0505 21:15:59.267364   29367 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:15:59.267439   29367 ssh_runner.go:195] Run: crio config
	I0505 21:15:59.315965   29367 cni.go:84] Creating CNI manager for ""
	I0505 21:15:59.315986   29367 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0505 21:15:59.315996   29367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:15:59.316020   29367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-322980 NodeName:ha-322980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:15:59.316171   29367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-322980"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 21:15:59.316207   29367 kube-vip.go:111] generating kube-vip config ...
	I0505 21:15:59.316259   29367 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:15:59.342014   29367 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:15:59.342129   29367 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0505 21:15:59.342205   29367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:15:59.354767   29367 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:15:59.354825   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 21:15:59.367195   29367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0505 21:15:59.387633   29367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:15:59.407122   29367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0505 21:15:59.426762   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0505 21:15:59.446645   29367 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:15:59.451385   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:15:59.466763   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:15:59.592147   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:15:59.611747   29367 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.178
	I0505 21:15:59.611768   29367 certs.go:194] generating shared ca certs ...
	I0505 21:15:59.611781   29367 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.611944   29367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:15:59.611995   29367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:15:59.612009   29367 certs.go:256] generating profile certs ...
	I0505 21:15:59.612081   29367 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:15:59.612104   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt with IP's: []
	I0505 21:15:59.789220   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt ...
	I0505 21:15:59.789246   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt: {Name:mkb9b4c515630ef7d7577699d1dd0f62181a2e95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.789421   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key ...
	I0505 21:15:59.789434   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key: {Name:mk3d64e88d4cf5cb8950198d8016844ad9d51ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.789530   29367 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1
	I0505 21:15:59.789552   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.254]
	I0505 21:15:59.929903   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1 ...
	I0505 21:15:59.929930   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1: {Name:mk9f7624fdabd39cce044f7ff8479aed79f944ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.930123   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1 ...
	I0505 21:15:59.930139   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1: {Name:mk9061c1eb79654726a0dd80d3f445c84d886d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.930235   29367 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:15:59.930309   29367 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:15:59.930361   29367 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:15:59.930375   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt with IP's: []
	I0505 21:16:00.114106   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt ...
	I0505 21:16:00.114134   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt: {Name:mkbc3987c5d5fa173c87a9b09d862fa07695ac93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:00.114314   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key ...
	I0505 21:16:00.114329   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key: {Name:mk7cdbe77608aed5ce72b4baebcbf84870ae6fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:00.114426   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:16:00.114445   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:16:00.114456   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:16:00.114469   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:16:00.114481   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:16:00.114500   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:16:00.114516   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:16:00.114533   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:16:00.114600   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:16:00.114633   29367 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:16:00.114646   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:16:00.114680   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:16:00.114702   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:16:00.114722   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:16:00.114761   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:16:00.114805   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.114828   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.114842   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.115355   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:16:00.150022   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:16:00.181766   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:16:00.215392   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:16:00.246046   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0505 21:16:00.276357   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 21:16:00.303779   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:16:00.331749   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:16:00.357748   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:16:00.387589   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:16:00.414236   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:16:00.440055   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:16:00.458944   29367 ssh_runner.go:195] Run: openssl version
	I0505 21:16:00.465242   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:16:00.478123   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.482993   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.483037   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.489225   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:16:00.501929   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:16:00.515529   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.520459   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.520507   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.526773   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:16:00.539611   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:16:00.552758   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.557535   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.557579   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.563917   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
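	A minimal sketch (not part of the run output) of the hashed-symlink convention the three test/ln/openssl sequences above follow: OpenSSL resolves CA certificates in /etc/ssl/certs by subject-hash filenames of the form <hash>.0, so each copied PEM is linked under the hash that `openssl x509 -hash` prints. For the 187982.pem case this amounts to:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem)
	  sudo ln -fs /usr/share/ca-certificates/187982.pem "/etc/ssl/certs/${HASH}.0"   # matches the 3ec20f2e.0 link above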
	I0505 21:16:00.577907   29367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:16:00.582480   29367 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 21:16:00.582522   29367 kubeadm.go:391] StartCluster: {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:16:00.582610   29367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:16:00.582676   29367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:16:00.624855   29367 cri.go:89] found id: ""
	I0505 21:16:00.624933   29367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0505 21:16:00.637047   29367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 21:16:00.650968   29367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 21:16:00.663499   29367 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 21:16:00.663519   29367 kubeadm.go:156] found existing configuration files:
	
	I0505 21:16:00.663565   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 21:16:00.675054   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 21:16:00.675110   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 21:16:00.686684   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 21:16:00.697979   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 21:16:00.698033   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 21:16:00.709267   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 21:16:00.720257   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 21:16:00.720302   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 21:16:00.731752   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 21:16:00.742646   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 21:16:00.742695   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
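	A compact bash sketch (an assumed equivalent, not taken from the minikube code) of the stale-config check performed by the grep/rm pairs above: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs.
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done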
	I0505 21:16:00.753969   29367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 21:16:00.877747   29367 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0505 21:16:00.877979   29367 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 21:16:01.027519   29367 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 21:16:01.027629   29367 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 21:16:01.027768   29367 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 21:16:01.253201   29367 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 21:16:01.394240   29367 out.go:204]   - Generating certificates and keys ...
	I0505 21:16:01.394379   29367 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 21:16:01.394460   29367 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 21:16:01.403637   29367 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0505 21:16:01.616128   29367 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0505 21:16:01.992561   29367 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0505 21:16:02.239704   29367 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0505 21:16:02.368329   29367 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0505 21:16:02.368565   29367 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-322980 localhost] and IPs [192.168.39.178 127.0.0.1 ::1]
	I0505 21:16:02.563897   29367 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0505 21:16:02.564112   29367 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-322980 localhost] and IPs [192.168.39.178 127.0.0.1 ::1]
	I0505 21:16:02.730896   29367 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0505 21:16:02.936943   29367 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0505 21:16:03.179224   29367 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0505 21:16:03.179425   29367 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 21:16:03.340119   29367 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 21:16:03.426263   29367 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0505 21:16:03.564383   29367 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 21:16:03.694444   29367 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 21:16:03.954715   29367 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 21:16:03.955430   29367 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 21:16:03.957841   29367 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 21:16:03.959513   29367 out.go:204]   - Booting up control plane ...
	I0505 21:16:03.959631   29367 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 21:16:03.959742   29367 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 21:16:03.960883   29367 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 21:16:03.989820   29367 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 21:16:03.989937   29367 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 21:16:03.989992   29367 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 21:16:04.141772   29367 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0505 21:16:04.141912   29367 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0505 21:16:04.643333   29367 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.61592ms
	I0505 21:16:04.643425   29367 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0505 21:16:13.671466   29367 kubeadm.go:309] [api-check] The API server is healthy after 9.027059086s
	I0505 21:16:13.687747   29367 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 21:16:13.701785   29367 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 21:16:13.732952   29367 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 21:16:13.733222   29367 kubeadm.go:309] [mark-control-plane] Marking the node ha-322980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 21:16:13.754735   29367 kubeadm.go:309] [bootstrap-token] Using token: 2zgn2d.a9djy29f23rnuhm1
	I0505 21:16:13.756246   29367 out.go:204]   - Configuring RBAC rules ...
	I0505 21:16:13.756392   29367 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 21:16:13.765989   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 21:16:13.775726   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 21:16:13.782240   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 21:16:13.785796   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 21:16:13.789688   29367 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 21:16:14.080336   29367 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 21:16:14.511103   29367 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 21:16:15.079442   29367 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 21:16:15.080502   29367 kubeadm.go:309] 
	I0505 21:16:15.080583   29367 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 21:16:15.080600   29367 kubeadm.go:309] 
	I0505 21:16:15.080671   29367 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 21:16:15.080678   29367 kubeadm.go:309] 
	I0505 21:16:15.080723   29367 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 21:16:15.080828   29367 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 21:16:15.080890   29367 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 21:16:15.080906   29367 kubeadm.go:309] 
	I0505 21:16:15.080950   29367 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 21:16:15.080956   29367 kubeadm.go:309] 
	I0505 21:16:15.080996   29367 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 21:16:15.081004   29367 kubeadm.go:309] 
	I0505 21:16:15.081047   29367 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 21:16:15.081153   29367 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 21:16:15.081264   29367 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 21:16:15.081278   29367 kubeadm.go:309] 
	I0505 21:16:15.081363   29367 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 21:16:15.081437   29367 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 21:16:15.081444   29367 kubeadm.go:309] 
	I0505 21:16:15.081569   29367 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2zgn2d.a9djy29f23rnuhm1 \
	I0505 21:16:15.081706   29367 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 \
	I0505 21:16:15.081757   29367 kubeadm.go:309] 	--control-plane 
	I0505 21:16:15.081764   29367 kubeadm.go:309] 
	I0505 21:16:15.081874   29367 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 21:16:15.081883   29367 kubeadm.go:309] 
	I0505 21:16:15.081965   29367 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2zgn2d.a9djy29f23rnuhm1 \
	I0505 21:16:15.082059   29367 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 
	I0505 21:16:15.082671   29367 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 21:16:15.082725   29367 cni.go:84] Creating CNI manager for ""
	I0505 21:16:15.082738   29367 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0505 21:16:15.084351   29367 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0505 21:16:15.085703   29367 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0505 21:16:15.092212   29367 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0505 21:16:15.092228   29367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0505 21:16:15.114432   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
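	One way to confirm the applied CNI manifest rolled out (the DaemonSet name `kindnet` is an assumption based on the recommendation above, not read from this log):
	  sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get daemonset kindnet   # name assumed; adjust to whatever the manifest actually defines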
	I0505 21:16:15.477564   29367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 21:16:15.477659   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:15.477698   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-322980 minikube.k8s.io/updated_at=2024_05_05T21_16_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=ha-322980 minikube.k8s.io/primary=true
	I0505 21:16:15.749581   29367 ops.go:34] apiserver oom_adj: -16
	I0505 21:16:15.749706   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:16.249813   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:16.750664   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:17.249922   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:17.750161   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:18.249824   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:18.750723   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:19.250399   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:19.750617   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:20.250156   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:20.749934   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:21.249823   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:21.750563   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:22.250502   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:22.750279   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:23.250613   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:23.749792   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:24.250417   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:24.750496   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:25.249969   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:25.382904   29367 kubeadm.go:1107] duration metric: took 9.90531208s to wait for elevateKubeSystemPrivileges
	W0505 21:16:25.382956   29367 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 21:16:25.382966   29367 kubeadm.go:393] duration metric: took 24.800444819s to StartCluster
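	The run of `get sa default` calls above is a readiness poll; a bash equivalent of that wait (the 0.5s interval mirrors the roughly 500ms spacing of the timestamps) would be:
	  until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done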
	I0505 21:16:25.382988   29367 settings.go:142] acquiring lock: {Name:mkbe19b7965e4b0b9928cd2b7b56f51dec95b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:25.383079   29367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:16:25.383788   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:25.384008   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0505 21:16:25.384035   29367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 21:16:25.384010   29367 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:16:25.384160   29367 start.go:240] waiting for startup goroutines ...
	I0505 21:16:25.384130   29367 addons.go:69] Setting default-storageclass=true in profile "ha-322980"
	I0505 21:16:25.384221   29367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-322980"
	I0505 21:16:25.384259   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:25.384130   29367 addons.go:69] Setting storage-provisioner=true in profile "ha-322980"
	I0505 21:16:25.384321   29367 addons.go:234] Setting addon storage-provisioner=true in "ha-322980"
	I0505 21:16:25.384352   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:16:25.384717   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.384768   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.384717   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.384836   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.406853   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0505 21:16:25.406907   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0505 21:16:25.407353   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.407405   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.407888   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.407916   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.408040   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.408065   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.408291   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.408408   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.408547   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:25.408875   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.408927   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.410823   29367 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:16:25.411166   29367 kapi.go:59] client config for ha-322980: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 21:16:25.411799   29367 cert_rotation.go:137] Starting client certificate rotation controller
	I0505 21:16:25.412003   29367 addons.go:234] Setting addon default-storageclass=true in "ha-322980"
	I0505 21:16:25.412046   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:16:25.412446   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.412488   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.424430   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0505 21:16:25.424871   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.425369   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.425393   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.425746   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.425926   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:25.427410   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46201
	I0505 21:16:25.427665   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:16:25.427765   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.429429   29367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 21:16:25.428148   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.430670   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.430755   29367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 21:16:25.430776   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 21:16:25.430797   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:16:25.431020   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.431657   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.431699   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.433553   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.433852   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:16:25.433876   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.433954   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:16:25.434123   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:16:25.434253   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:16:25.434386   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:16:25.452948   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0505 21:16:25.453407   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.454001   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.454024   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.454507   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.454716   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:25.456452   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:16:25.456729   29367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 21:16:25.456743   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 21:16:25.456755   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:16:25.460063   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.460505   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:16:25.460524   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.460705   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:16:25.460870   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:16:25.461048   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:16:25.461184   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:16:25.583707   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0505 21:16:25.694966   29367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 21:16:25.785183   29367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 21:16:26.222314   29367 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
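	Reconstructed from the sed pipeline above (not dumped from the cluster): the CoreDNS Corefile now carries a hosts block ahead of its forward directive and a log directive ahead of errors. It can be inspected with:
	  kubectl --context ha-322980 -n kube-system get configmap coredns -o yaml
	  # expected fragment (reconstruction):
	  #     hosts {
	  #        192.168.39.1 host.minikube.internal
	  #        fallthrough
	  #     }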
	I0505 21:16:26.624176   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624200   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624318   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624330   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624526   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.624546   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.624556   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624564   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624658   29367 main.go:141] libmachine: (ha-322980) DBG | Closing plugin on server side
	I0505 21:16:26.624710   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.624728   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.624754   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624763   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624823   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.624853   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.624966   29367 main.go:141] libmachine: (ha-322980) DBG | Closing plugin on server side
	I0505 21:16:26.625009   29367 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0505 21:16:26.625017   29367 round_trippers.go:469] Request Headers:
	I0505 21:16:26.625027   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:16:26.625033   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:16:26.625051   29367 main.go:141] libmachine: (ha-322980) DBG | Closing plugin on server side
	I0505 21:16:26.625133   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.625179   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.637795   29367 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0505 21:16:26.638368   29367 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0505 21:16:26.638385   29367 round_trippers.go:469] Request Headers:
	I0505 21:16:26.638393   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:16:26.638398   29367 round_trippers.go:473]     Content-Type: application/json
	I0505 21:16:26.638401   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:16:26.641594   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:16:26.642094   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.642108   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.642446   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.642466   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.644455   29367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0505 21:16:26.645767   29367 addons.go:510] duration metric: took 1.26173268s for enable addons: enabled=[storage-provisioner default-storageclass]
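	Two quick follow-up checks for the addons just enabled; the StorageClass name `standard` comes from the PUT above, while the pod name `storage-provisioner` is an assumption based on the stock addon manifest:
	  kubectl --context ha-322980 get storageclass standard
	  kubectl --context ha-322980 -n kube-system get pod storage-provisioner   # pod name assumed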
	I0505 21:16:26.645813   29367 start.go:245] waiting for cluster config update ...
	I0505 21:16:26.645829   29367 start.go:254] writing updated cluster config ...
	I0505 21:16:26.647406   29367 out.go:177] 
	I0505 21:16:26.648783   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:26.648891   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:16:26.650599   29367 out.go:177] * Starting "ha-322980-m02" control-plane node in "ha-322980" cluster
	I0505 21:16:26.652020   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:16:26.652049   29367 cache.go:56] Caching tarball of preloaded images
	I0505 21:16:26.652154   29367 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:16:26.652170   29367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:16:26.652280   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:16:26.652499   29367 start.go:360] acquireMachinesLock for ha-322980-m02: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:16:26.652560   29367 start.go:364] duration metric: took 33.568µs to acquireMachinesLock for "ha-322980-m02"
	I0505 21:16:26.652585   29367 start.go:93] Provisioning new machine with config: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:16:26.652691   29367 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0505 21:16:26.654570   29367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 21:16:26.654684   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:26.654729   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:26.669319   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0505 21:16:26.669732   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:26.670192   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:26.670222   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:26.670564   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:26.670808   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:26.670987   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:26.671171   29367 start.go:159] libmachine.API.Create for "ha-322980" (driver="kvm2")
	I0505 21:16:26.671204   29367 client.go:168] LocalClient.Create starting
	I0505 21:16:26.671243   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 21:16:26.671287   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:16:26.671309   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:16:26.671374   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 21:16:26.671401   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:16:26.671418   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:16:26.671442   29367 main.go:141] libmachine: Running pre-create checks...
	I0505 21:16:26.671454   29367 main.go:141] libmachine: (ha-322980-m02) Calling .PreCreateCheck
	I0505 21:16:26.671672   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetConfigRaw
	I0505 21:16:26.672146   29367 main.go:141] libmachine: Creating machine...
	I0505 21:16:26.672164   29367 main.go:141] libmachine: (ha-322980-m02) Calling .Create
	I0505 21:16:26.672317   29367 main.go:141] libmachine: (ha-322980-m02) Creating KVM machine...
	I0505 21:16:26.673647   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found existing default KVM network
	I0505 21:16:26.673752   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found existing private KVM network mk-ha-322980
	I0505 21:16:26.673890   29367 main.go:141] libmachine: (ha-322980-m02) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02 ...
	I0505 21:16:26.673913   29367 main.go:141] libmachine: (ha-322980-m02) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 21:16:26.673985   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:26.673869   29784 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:16:26.674089   29367 main.go:141] libmachine: (ha-322980-m02) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 21:16:26.889974   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:26.889821   29784 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa...
	I0505 21:16:27.045565   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:27.045423   29784 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/ha-322980-m02.rawdisk...
	I0505 21:16:27.045619   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Writing magic tar header
	I0505 21:16:27.045630   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Writing SSH key tar header
	I0505 21:16:27.045643   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:27.045539   29784 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02 ...
	I0505 21:16:27.045665   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02
	I0505 21:16:27.045685   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02 (perms=drwx------)
	I0505 21:16:27.045699   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 21:16:27.045726   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:16:27.045735   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 21:16:27.045749   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 21:16:27.045761   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins
	I0505 21:16:27.045792   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 21:16:27.045813   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home
	I0505 21:16:27.045820   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 21:16:27.045833   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 21:16:27.045847   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 21:16:27.045859   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Skipping /home - not owner
	I0505 21:16:27.045872   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 21:16:27.045881   29367 main.go:141] libmachine: (ha-322980-m02) Creating domain...
	I0505 21:16:27.046595   29367 main.go:141] libmachine: (ha-322980-m02) define libvirt domain using xml: 
	I0505 21:16:27.046614   29367 main.go:141] libmachine: (ha-322980-m02) <domain type='kvm'>
	I0505 21:16:27.046623   29367 main.go:141] libmachine: (ha-322980-m02)   <name>ha-322980-m02</name>
	I0505 21:16:27.046634   29367 main.go:141] libmachine: (ha-322980-m02)   <memory unit='MiB'>2200</memory>
	I0505 21:16:27.046646   29367 main.go:141] libmachine: (ha-322980-m02)   <vcpu>2</vcpu>
	I0505 21:16:27.046651   29367 main.go:141] libmachine: (ha-322980-m02)   <features>
	I0505 21:16:27.046659   29367 main.go:141] libmachine: (ha-322980-m02)     <acpi/>
	I0505 21:16:27.046664   29367 main.go:141] libmachine: (ha-322980-m02)     <apic/>
	I0505 21:16:27.046669   29367 main.go:141] libmachine: (ha-322980-m02)     <pae/>
	I0505 21:16:27.046673   29367 main.go:141] libmachine: (ha-322980-m02)     
	I0505 21:16:27.046678   29367 main.go:141] libmachine: (ha-322980-m02)   </features>
	I0505 21:16:27.046686   29367 main.go:141] libmachine: (ha-322980-m02)   <cpu mode='host-passthrough'>
	I0505 21:16:27.046692   29367 main.go:141] libmachine: (ha-322980-m02)   
	I0505 21:16:27.046702   29367 main.go:141] libmachine: (ha-322980-m02)   </cpu>
	I0505 21:16:27.046722   29367 main.go:141] libmachine: (ha-322980-m02)   <os>
	I0505 21:16:27.046740   29367 main.go:141] libmachine: (ha-322980-m02)     <type>hvm</type>
	I0505 21:16:27.046752   29367 main.go:141] libmachine: (ha-322980-m02)     <boot dev='cdrom'/>
	I0505 21:16:27.046765   29367 main.go:141] libmachine: (ha-322980-m02)     <boot dev='hd'/>
	I0505 21:16:27.046775   29367 main.go:141] libmachine: (ha-322980-m02)     <bootmenu enable='no'/>
	I0505 21:16:27.046781   29367 main.go:141] libmachine: (ha-322980-m02)   </os>
	I0505 21:16:27.046786   29367 main.go:141] libmachine: (ha-322980-m02)   <devices>
	I0505 21:16:27.046795   29367 main.go:141] libmachine: (ha-322980-m02)     <disk type='file' device='cdrom'>
	I0505 21:16:27.046805   29367 main.go:141] libmachine: (ha-322980-m02)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/boot2docker.iso'/>
	I0505 21:16:27.046813   29367 main.go:141] libmachine: (ha-322980-m02)       <target dev='hdc' bus='scsi'/>
	I0505 21:16:27.046820   29367 main.go:141] libmachine: (ha-322980-m02)       <readonly/>
	I0505 21:16:27.046826   29367 main.go:141] libmachine: (ha-322980-m02)     </disk>
	I0505 21:16:27.046833   29367 main.go:141] libmachine: (ha-322980-m02)     <disk type='file' device='disk'>
	I0505 21:16:27.046843   29367 main.go:141] libmachine: (ha-322980-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 21:16:27.046871   29367 main.go:141] libmachine: (ha-322980-m02)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/ha-322980-m02.rawdisk'/>
	I0505 21:16:27.046892   29367 main.go:141] libmachine: (ha-322980-m02)       <target dev='hda' bus='virtio'/>
	I0505 21:16:27.046900   29367 main.go:141] libmachine: (ha-322980-m02)     </disk>
	I0505 21:16:27.046904   29367 main.go:141] libmachine: (ha-322980-m02)     <interface type='network'>
	I0505 21:16:27.046910   29367 main.go:141] libmachine: (ha-322980-m02)       <source network='mk-ha-322980'/>
	I0505 21:16:27.046929   29367 main.go:141] libmachine: (ha-322980-m02)       <model type='virtio'/>
	I0505 21:16:27.046938   29367 main.go:141] libmachine: (ha-322980-m02)     </interface>
	I0505 21:16:27.046943   29367 main.go:141] libmachine: (ha-322980-m02)     <interface type='network'>
	I0505 21:16:27.046949   29367 main.go:141] libmachine: (ha-322980-m02)       <source network='default'/>
	I0505 21:16:27.046956   29367 main.go:141] libmachine: (ha-322980-m02)       <model type='virtio'/>
	I0505 21:16:27.046962   29367 main.go:141] libmachine: (ha-322980-m02)     </interface>
	I0505 21:16:27.046967   29367 main.go:141] libmachine: (ha-322980-m02)     <serial type='pty'>
	I0505 21:16:27.046973   29367 main.go:141] libmachine: (ha-322980-m02)       <target port='0'/>
	I0505 21:16:27.046980   29367 main.go:141] libmachine: (ha-322980-m02)     </serial>
	I0505 21:16:27.046986   29367 main.go:141] libmachine: (ha-322980-m02)     <console type='pty'>
	I0505 21:16:27.046995   29367 main.go:141] libmachine: (ha-322980-m02)       <target type='serial' port='0'/>
	I0505 21:16:27.047023   29367 main.go:141] libmachine: (ha-322980-m02)     </console>
	I0505 21:16:27.047046   29367 main.go:141] libmachine: (ha-322980-m02)     <rng model='virtio'>
	I0505 21:16:27.047061   29367 main.go:141] libmachine: (ha-322980-m02)       <backend model='random'>/dev/random</backend>
	I0505 21:16:27.047070   29367 main.go:141] libmachine: (ha-322980-m02)     </rng>
	I0505 21:16:27.047078   29367 main.go:141] libmachine: (ha-322980-m02)     
	I0505 21:16:27.047088   29367 main.go:141] libmachine: (ha-322980-m02)     
	I0505 21:16:27.047100   29367 main.go:141] libmachine: (ha-322980-m02)   </devices>
	I0505 21:16:27.047114   29367 main.go:141] libmachine: (ha-322980-m02) </domain>
	I0505 21:16:27.047137   29367 main.go:141] libmachine: (ha-322980-m02) 
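The XML dumped above is the full libvirt domain definition for the new node: boot from the boot2docker ISO, a raw virtio disk, one NIC on the private mk-ha-322980 network and one on libvirt's default NAT network. Below is a minimal Go sketch of rendering such a definition with text/template; the struct, template text and paths are illustrative assumptions for this report, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// domainSpec holds the values that vary per machine in the XML above:
// the machine name, the ISO used as a boot CD-ROM, the raw disk image,
// and the two networks the virtio NICs attach to. Field names are
// illustrative, not minikube's actual struct.
type domainSpec struct {
	Name       string
	ISOPath    string
	DiskPath   string
	Network    string // private cluster network, e.g. mk-ha-322980
	DefaultNet string // libvirt "default" NAT network
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='{{.DefaultNet}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	spec := domainSpec{
		Name:       "ha-322980-m02",
		ISOPath:    "/path/to/boot2docker.iso",   // illustrative path
		DiskPath:   "/path/to/ha-322980-m02.rawdisk",
		Network:    "mk-ha-322980",
		DefaultNet: "default",
	}
	// Printing to stdout here; the real driver hands the XML to libvirt
	// to define the domain.
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}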
	I0505 21:16:27.053474   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:a2:9a:5e in network default
	I0505 21:16:27.054066   29367 main.go:141] libmachine: (ha-322980-m02) Ensuring networks are active...
	I0505 21:16:27.054089   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:27.054781   29367 main.go:141] libmachine: (ha-322980-m02) Ensuring network default is active
	I0505 21:16:27.055053   29367 main.go:141] libmachine: (ha-322980-m02) Ensuring network mk-ha-322980 is active
	I0505 21:16:27.055373   29367 main.go:141] libmachine: (ha-322980-m02) Getting domain xml...
	I0505 21:16:27.056030   29367 main.go:141] libmachine: (ha-322980-m02) Creating domain...
	I0505 21:16:28.264297   29367 main.go:141] libmachine: (ha-322980-m02) Waiting to get IP...
	I0505 21:16:28.265277   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:28.265768   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:28.265812   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:28.265757   29784 retry.go:31] will retry after 218.278648ms: waiting for machine to come up
	I0505 21:16:28.485333   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:28.485945   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:28.485972   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:28.485893   29784 retry.go:31] will retry after 357.838703ms: waiting for machine to come up
	I0505 21:16:28.845674   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:28.846151   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:28.846181   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:28.846100   29784 retry.go:31] will retry after 443.483557ms: waiting for machine to come up
	I0505 21:16:29.293044   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:29.293529   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:29.293553   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:29.293488   29784 retry.go:31] will retry after 526.787702ms: waiting for machine to come up
	I0505 21:16:29.822198   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:29.822556   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:29.822595   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:29.822513   29784 retry.go:31] will retry after 458.871695ms: waiting for machine to come up
	I0505 21:16:30.283446   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:30.283853   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:30.283873   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:30.283823   29784 retry.go:31] will retry after 611.219423ms: waiting for machine to come up
	I0505 21:16:30.896969   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:30.897428   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:30.897458   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:30.897368   29784 retry.go:31] will retry after 1.100483339s: waiting for machine to come up
	I0505 21:16:31.999907   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:32.000354   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:32.000391   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:32.000332   29784 retry.go:31] will retry after 1.25923991s: waiting for machine to come up
	I0505 21:16:33.261662   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:33.262111   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:33.262139   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:33.262046   29784 retry.go:31] will retry after 1.398082567s: waiting for machine to come up
	I0505 21:16:34.662648   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:34.663130   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:34.663157   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:34.663082   29784 retry.go:31] will retry after 2.195675763s: waiting for machine to come up
	I0505 21:16:36.860415   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:36.860874   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:36.860904   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:36.860816   29784 retry.go:31] will retry after 2.407725991s: waiting for machine to come up
	I0505 21:16:39.269961   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:39.270455   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:39.270488   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:39.270370   29784 retry.go:31] will retry after 2.806944631s: waiting for machine to come up
	I0505 21:16:42.079610   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:42.079993   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:42.080019   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:42.079955   29784 retry.go:31] will retry after 3.727124624s: waiting for machine to come up
	I0505 21:16:45.812094   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:45.812553   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:45.812580   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:45.812502   29784 retry.go:31] will retry after 5.548395809s: waiting for machine to come up
	I0505 21:16:51.364646   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.365085   29367 main.go:141] libmachine: (ha-322980-m02) Found IP for machine: 192.168.39.228
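Between defining the domain and finding its address, the driver polls the network's DHCP leases for the machine's MAC and, as the retry.go lines above show, sleeps for a growing, jittered interval between attempts. A minimal sketch of that wait loop, with a placeholder lookupIP standing in for the real lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real DHCP-lease lookup keyed on the domain's
// MAC address; it is a placeholder for this sketch only.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log, until the deadline expires.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("machine with MAC %s did not get an IP within %v", mac, timeout)
}

func main() {
	// Short timeout so the demo terminates quickly.
	if ip, err := waitForIP("52:54:00:91:59:b4", 2*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}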
	I0505 21:16:51.365105   29367 main.go:141] libmachine: (ha-322980-m02) Reserving static IP address...
	I0505 21:16:51.365115   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has current primary IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.365563   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find host DHCP lease matching {name: "ha-322980-m02", mac: "52:54:00:91:59:b4", ip: "192.168.39.228"} in network mk-ha-322980
	I0505 21:16:51.435239   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Getting to WaitForSSH function...
	I0505 21:16:51.435274   29367 main.go:141] libmachine: (ha-322980-m02) Reserved static IP address: 192.168.39.228
	I0505 21:16:51.435287   29367 main.go:141] libmachine: (ha-322980-m02) Waiting for SSH to be available...
	I0505 21:16:51.437836   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.438330   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.438351   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.438466   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Using SSH client type: external
	I0505 21:16:51.438491   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa (-rw-------)
	I0505 21:16:51.438564   29367 main.go:141] libmachine: (ha-322980-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:16:51.438598   29367 main.go:141] libmachine: (ha-322980-m02) DBG | About to run SSH command:
	I0505 21:16:51.438618   29367 main.go:141] libmachine: (ha-322980-m02) DBG | exit 0
	I0505 21:16:51.567511   29367 main.go:141] libmachine: (ha-322980-m02) DBG | SSH cmd err, output: <nil>: 
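SSH readiness is probed with the system ssh binary and the options logged above (no host-key checking, key-only auth), running a bare `exit 0` until it succeeds. A self-contained sketch of that probe, with an illustrative key path and the node address from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest through the system ssh binary with
// roughly the options shown in the log. It reports true once the command
// exits cleanly. The key path is illustrative.
func sshReady(addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.228", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}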
	I0505 21:16:51.567784   29367 main.go:141] libmachine: (ha-322980-m02) KVM machine creation complete!
	I0505 21:16:51.568084   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetConfigRaw
	I0505 21:16:51.568642   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:51.568841   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:51.569057   29367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 21:16:51.569078   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:16:51.570245   29367 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 21:16:51.570261   29367 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 21:16:51.570268   29367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 21:16:51.570276   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.572647   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.573050   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.573078   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.573239   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.573429   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.573554   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.573703   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.573897   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.574127   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.574144   29367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 21:16:51.683516   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:16:51.683541   29367 main.go:141] libmachine: Detecting the provisioner...
	I0505 21:16:51.683551   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.686290   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.686643   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.686683   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.686821   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.687014   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.687163   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.687301   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.687439   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.687619   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.687631   29367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 21:16:51.796608   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 21:16:51.796686   29367 main.go:141] libmachine: found compatible host: buildroot
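Provisioner detection boils down to parsing the key=value pairs of /etc/os-release and matching the ID. A small sketch using the exact output captured above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into a key/value map,
// stripping optional quotes, which is all the provisioner check needs.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`
	info := parseOSRelease(out)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}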
	I0505 21:16:51.796701   29367 main.go:141] libmachine: Provisioning with buildroot...
	I0505 21:16:51.796712   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:51.796966   29367 buildroot.go:166] provisioning hostname "ha-322980-m02"
	I0505 21:16:51.796991   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:51.797188   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.799655   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.800009   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.800052   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.800195   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.800373   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.800545   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.800687   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.800857   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.801031   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.801045   29367 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980-m02 && echo "ha-322980-m02" | sudo tee /etc/hostname
	I0505 21:16:51.925690   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980-m02
	
	I0505 21:16:51.925718   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.928452   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.928818   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.928847   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.929034   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.929240   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.929418   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.929596   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.929764   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.929957   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.929981   29367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:16:52.050564   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
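The hostname step above is two commands sent over SSH: set the hostname, then patch the 127.0.1.1 entry in /etc/hosts. The sketch below only composes those command strings the way a runner would before sending them; the helper name is made up for illustration.

package main

import "fmt"

// hostnameCommands returns the two shell snippets from the log: set the
// hostname, then ensure /etc/hosts maps 127.0.1.1 to it.
func hostnameCommands(name string) []string {
	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return []string{set, fixHosts}
}

func main() {
	for _, cmd := range hostnameCommands("ha-322980-m02") {
		fmt.Println(cmd)
	}
}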
	I0505 21:16:52.050592   29367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:16:52.050623   29367 buildroot.go:174] setting up certificates
	I0505 21:16:52.050635   29367 provision.go:84] configureAuth start
	I0505 21:16:52.050664   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:52.050929   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:52.053658   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.053995   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.054022   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.054179   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.056345   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.056742   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.056785   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.056933   29367 provision.go:143] copyHostCerts
	I0505 21:16:52.056963   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:16:52.057002   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:16:52.057015   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:16:52.057124   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:16:52.057244   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:16:52.057279   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:16:52.057291   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:16:52.057333   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:16:52.057423   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:16:52.057452   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:16:52.057460   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:16:52.057495   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:16:52.057591   29367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980-m02 san=[127.0.0.1 192.168.39.228 ha-322980-m02 localhost minikube]
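The server certificate generated here is an ordinary x509 server cert signed by the minikube CA, carrying the SANs listed in the log line above (loopback, the node IP, the hostname, localhost, minikube). The sketch below creates a throwaway CA in memory purely to stay self-contained; minikube instead loads its existing ca.pem/ca-key.pem from the .minikube directory.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA, only so the example is self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SAN list from the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-322980-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.228")},
		DNSNames:     []string{"ha-322980-m02", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}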
	I0505 21:16:52.379058   29367 provision.go:177] copyRemoteCerts
	I0505 21:16:52.379126   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:16:52.379157   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.381743   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.382033   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.382055   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.382240   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.382430   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.382567   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.382695   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:52.467046   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:16:52.467173   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:16:52.495979   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:16:52.496050   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 21:16:52.521847   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:16:52.521908   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:16:52.548671   29367 provision.go:87] duration metric: took 498.021001ms to configureAuth
	I0505 21:16:52.548705   29367 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:16:52.548932   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:52.549017   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.551653   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.552024   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.552052   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.552252   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.552447   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.552591   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.552711   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.552940   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:52.553095   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:52.553115   29367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:16:52.834425   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:16:52.834461   29367 main.go:141] libmachine: Checking connection to Docker...
	I0505 21:16:52.834473   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetURL
	I0505 21:16:52.835752   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Using libvirt version 6000000
	I0505 21:16:52.838267   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.838630   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.838661   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.838814   29367 main.go:141] libmachine: Docker is up and running!
	I0505 21:16:52.838831   29367 main.go:141] libmachine: Reticulating splines...
	I0505 21:16:52.838838   29367 client.go:171] duration metric: took 26.167624154s to LocalClient.Create
	I0505 21:16:52.838862   29367 start.go:167] duration metric: took 26.167693485s to libmachine.API.Create "ha-322980"
	I0505 21:16:52.838878   29367 start.go:293] postStartSetup for "ha-322980-m02" (driver="kvm2")
	I0505 21:16:52.838891   29367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:16:52.838922   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:52.839161   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:16:52.839190   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.841234   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.841492   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.841524   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.841633   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.841818   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.842002   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.842139   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:52.929492   29367 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:16:52.934730   29367 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:16:52.934753   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:16:52.934827   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:16:52.934909   29367 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:16:52.934921   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:16:52.935015   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:16:52.947700   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:16:52.975618   29367 start.go:296] duration metric: took 136.725548ms for postStartSetup
	I0505 21:16:52.975750   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetConfigRaw
	I0505 21:16:52.976327   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:52.979170   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.979558   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.979588   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.979776   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:16:52.979943   29367 start.go:128] duration metric: took 26.327239423s to createHost
	I0505 21:16:52.979963   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.982126   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.982548   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.982584   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.982731   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.982921   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.983066   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.983211   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.983418   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:52.983623   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:52.983637   29367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:16:53.092793   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714943813.078239933
	
	I0505 21:16:53.092814   29367 fix.go:216] guest clock: 1714943813.078239933
	I0505 21:16:53.092825   29367 fix.go:229] Guest: 2024-05-05 21:16:53.078239933 +0000 UTC Remote: 2024-05-05 21:16:52.979953804 +0000 UTC m=+84.842713381 (delta=98.286129ms)
	I0505 21:16:53.092843   29367 fix.go:200] guest clock delta is within tolerance: 98.286129ms
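The guest-clock check compares the output of `date +%s.%N` on the guest against the host clock and accepts the node if the delta stays inside a tolerance. A local-only sketch of that comparison (the 2s threshold is an assumption, not minikube's value; minikube runs the date command over SSH):

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// clockDelta runs `date +%s.%N` locally, parses the result, and returns
// the difference from the local clock.
func clockDelta() (time.Duration, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	const tolerance = 2 * time.Second // illustrative threshold
	if math.Abs(float64(d)) < float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
	} else {
		fmt.Printf("clock skew too large: %v\n", d)
	}
}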
	I0505 21:16:53.092849   29367 start.go:83] releasing machines lock for "ha-322980-m02", held for 26.44027621s
	I0505 21:16:53.092873   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.093108   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:53.095332   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.095797   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:53.095828   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.098033   29367 out.go:177] * Found network options:
	I0505 21:16:53.099371   29367 out.go:177]   - NO_PROXY=192.168.39.178
	W0505 21:16:53.100556   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 21:16:53.100592   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.101074   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.101287   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.101369   29367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:16:53.101410   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	W0505 21:16:53.101489   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 21:16:53.101560   29367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:16:53.101582   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:53.103970   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104306   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:53.104334   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104444   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104513   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:53.104753   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:53.104898   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:53.104917   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104932   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:53.105077   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:53.105142   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:53.105295   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:53.105530   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:53.105702   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:53.350389   29367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:16:53.357679   29367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:16:53.357743   29367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:16:53.374942   29367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
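The find command above sidelines bridge and podman CNI configs by renaming them to *.mk_disabled so only the expected CNI remains. A sketch of the same rename pass in Go (running it against the real /etc/cni/net.d needs root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI config files in dir to
// <name>.mk_disabled and returns the list of files it touched.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, p := range entries {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already sidelined
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, p)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}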
	I0505 21:16:53.374965   29367 start.go:494] detecting cgroup driver to use...
	I0505 21:16:53.375033   29367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:16:53.392470   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:16:53.406913   29367 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:16:53.406967   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:16:53.420841   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:16:53.434674   29367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:16:53.556020   29367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:16:53.698587   29367 docker.go:233] disabling docker service ...
	I0505 21:16:53.698651   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:16:53.716510   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:16:53.731576   29367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:16:53.877152   29367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:16:53.991713   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:16:54.007884   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:16:54.029276   29367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:16:54.029330   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.041610   29367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:16:54.041671   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.053411   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.064311   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.075235   29367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:16:54.086120   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.098550   29367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.117350   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
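The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. The sketch below applies the two central substitutions to a made-up in-memory sample of that file, just to show what the sed expressions accomplish.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical sample of 02-crio.conf; the real file differs.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same intent as: sed -i 's|^.*pause_image = .*$|...|' and
	// sed -i 's|^.*cgroup_manager = .*$|...|'
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}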
	I0505 21:16:54.128050   29367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:16:54.137866   29367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 21:16:54.137913   29367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 21:16:54.152227   29367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
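When the bridge-nf-call-iptables sysctl is missing (as in the status 255 above), the fallback is to load br_netfilter and then enable IPv4 forwarding. A sketch of that fallback with plain exec calls (root is assumed when actually run):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the sysctl cannot be read, the bridge netfilter module is not loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge netfilter not available yet, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("could not enable ip_forward (needs root):", err)
	}
}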
	I0505 21:16:54.162712   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:16:54.280446   29367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:16:54.435248   29367 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:16:54.435317   29367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:16:54.442224   29367 start.go:562] Will wait 60s for crictl version
	I0505 21:16:54.442286   29367 ssh_runner.go:195] Run: which crictl
	I0505 21:16:54.446568   29367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:16:54.486604   29367 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:16:54.486669   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:16:54.521653   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:16:54.557850   29367 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:16:54.559337   29367 out.go:177]   - env NO_PROXY=192.168.39.178
	I0505 21:16:54.560303   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:54.562636   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:54.562931   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:54.562958   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:54.563214   29367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:16:54.567662   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
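The one-liner above rewrites /etc/hosts so host.minikube.internal always resolves to the gateway IP 192.168.39.1. The sketch below performs the same edit on an in-memory copy: drop any existing entry, then append the fresh one. The sample hosts content is made up for illustration.

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any stale line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop stale entry
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
	fmt.Print(ensureHostEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}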
	I0505 21:16:54.581456   29367 mustload.go:65] Loading cluster: ha-322980
	I0505 21:16:54.581648   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:54.582020   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:54.582062   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:54.596154   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0505 21:16:54.596542   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:54.596986   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:54.597013   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:54.597342   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:54.597559   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:54.598966   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:16:54.599233   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:54.599256   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:54.613190   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0505 21:16:54.613605   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:54.614051   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:54.614072   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:54.614317   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:54.614500   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:16:54.614659   29367 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.228
	I0505 21:16:54.614672   29367 certs.go:194] generating shared ca certs ...
	I0505 21:16:54.614684   29367 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:54.614823   29367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:16:54.614870   29367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:16:54.614880   29367 certs.go:256] generating profile certs ...
	I0505 21:16:54.614948   29367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:16:54.614972   29367 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b
	I0505 21:16:54.614986   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.254]
	I0505 21:16:54.759126   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b ...
	I0505 21:16:54.759153   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b: {Name:mkcf6f675dbe6e4e6e920993380cde57d475599a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:54.759333   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b ...
	I0505 21:16:54.759349   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b: {Name:mk0f3cf878fab5fa33854f97974df366519b30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:54.759450   29367 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:16:54.759608   29367 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:16:54.759729   29367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:16:54.759746   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:16:54.759758   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:16:54.759770   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:16:54.759783   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:16:54.759795   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:16:54.759807   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:16:54.759818   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:16:54.759830   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:16:54.759871   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:16:54.759899   29367 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:16:54.759908   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:16:54.759927   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:16:54.759950   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:16:54.759970   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:16:54.760006   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:16:54.760033   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:54.760046   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:16:54.760059   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:16:54.760088   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:16:54.763448   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:54.763917   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:16:54.763950   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:54.764100   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:16:54.764285   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:16:54.764463   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:16:54.764612   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:16:54.839997   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0505 21:16:54.845950   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 21:16:54.858185   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0505 21:16:54.862906   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0505 21:16:54.873652   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 21:16:54.878184   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 21:16:54.888814   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0505 21:16:54.893566   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 21:16:54.904232   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0505 21:16:54.908518   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 21:16:54.918924   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0505 21:16:54.923157   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 21:16:54.935092   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:16:54.964109   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:16:54.990070   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:16:55.017912   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:16:55.044937   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0505 21:16:55.070973   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:16:55.097019   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:16:55.123633   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:16:55.149575   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:16:55.175506   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:16:55.205977   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:16:55.235625   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 21:16:55.254123   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0505 21:16:55.272371   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 21:16:55.290906   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 21:16:55.309078   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 21:16:55.327422   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 21:16:55.344772   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 21:16:55.363547   29367 ssh_runner.go:195] Run: openssl version
	I0505 21:16:55.369907   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:16:55.381792   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:16:55.387285   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:16:55.387341   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:16:55.394098   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:16:55.406273   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:16:55.419321   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:55.424714   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:55.424778   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:55.431513   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:16:55.443798   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:16:55.455987   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:16:55.461266   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:16:55.461325   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:16:55.467703   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:16:55.479880   29367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:16:55.484693   29367 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 21:16:55.484760   29367 kubeadm.go:928] updating node {m02 192.168.39.228 8443 v1.30.0 crio true true} ...
	I0505 21:16:55.484839   29367 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:16:55.484868   29367 kube-vip.go:111] generating kube-vip config ...
	I0505 21:16:55.484902   29367 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:16:55.502712   29367 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:16:55.502793   29367 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 21:16:55.502840   29367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:16:55.515224   29367 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0505 21:16:55.515301   29367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0505 21:16:55.526926   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0505 21:16:55.526958   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:16:55.527026   29367 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0505 21:16:55.527061   29367 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0505 21:16:55.527039   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:16:55.533058   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0505 21:16:55.533090   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0505 21:17:17.719707   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:17:17.719778   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:17:17.726396   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0505 21:17:17.726428   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0505 21:17:50.020951   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:17:50.038891   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:17:50.038981   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:17:50.044160   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0505 21:17:50.044190   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0505 21:17:50.504506   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 21:17:50.515043   29367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0505 21:17:50.534875   29367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:17:50.556682   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:17:50.577948   29367 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:17:50.582568   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:17:50.596820   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:17:50.755906   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:17:50.779139   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:17:50.779615   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:17:50.779659   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:17:50.795054   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0505 21:17:50.795670   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:17:50.796336   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:17:50.796369   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:17:50.796697   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:17:50.796913   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:17:50.797113   29367 start.go:316] joinCluster: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:17:50.797222   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0505 21:17:50.797244   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:17:50.800376   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:17:50.800800   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:17:50.800824   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:17:50.801026   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:17:50.801179   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:17:50.801321   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:17:50.801444   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:17:50.981303   29367 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:17:50.981352   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4gn4z0.x12krlpmiirjw5ha --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m02 --control-plane --apiserver-advertise-address=192.168.39.228 --apiserver-bind-port=8443"
	I0505 21:18:15.158467   29367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4gn4z0.x12krlpmiirjw5ha --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m02 --control-plane --apiserver-advertise-address=192.168.39.228 --apiserver-bind-port=8443": (24.177086804s)
	I0505 21:18:15.158504   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0505 21:18:15.748681   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-322980-m02 minikube.k8s.io/updated_at=2024_05_05T21_18_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=ha-322980 minikube.k8s.io/primary=false
	I0505 21:18:15.913052   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-322980-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0505 21:18:16.054538   29367 start.go:318] duration metric: took 25.257420448s to joinCluster
	I0505 21:18:16.054611   29367 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:18:16.056263   29367 out.go:177] * Verifying Kubernetes components...
	I0505 21:18:16.054924   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:18:16.057883   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:18:16.308454   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:18:16.337902   29367 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:18:16.338272   29367 kapi.go:59] client config for ha-322980: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 21:18:16.338366   29367 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.178:8443
	I0505 21:18:16.338649   29367 node_ready.go:35] waiting up to 6m0s for node "ha-322980-m02" to be "Ready" ...
	I0505 21:18:16.338754   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:16.338767   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:16.338778   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:16.338788   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:16.349870   29367 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0505 21:18:16.839766   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:16.839788   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:16.839798   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:16.839805   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:16.852850   29367 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0505 21:18:17.338946   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:17.338970   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:17.338977   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:17.338980   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:17.343659   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:17.838912   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:17.838939   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:17.838947   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:17.838953   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:17.845453   29367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 21:18:18.339337   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:18.339359   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:18.339366   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:18.339369   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:18.342594   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:18.343349   29367 node_ready.go:53] node "ha-322980-m02" has status "Ready":"False"
	I0505 21:18:18.839700   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:18.839725   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:18.839735   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:18.839741   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:18.842741   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:19.339816   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:19.339842   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:19.339852   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:19.339857   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:19.343050   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:19.839284   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:19.839309   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:19.839321   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:19.839328   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:19.842392   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:20.339523   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:20.339546   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:20.339556   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:20.339561   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:20.415502   29367 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0505 21:18:20.416443   29367 node_ready.go:53] node "ha-322980-m02" has status "Ready":"False"
	I0505 21:18:20.839860   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:20.839882   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:20.839892   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:20.839897   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:20.843751   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:21.338821   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:21.338848   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:21.338857   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:21.338861   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:21.342545   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:21.839191   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:21.839209   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:21.839214   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:21.839217   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:21.842470   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:22.339079   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:22.339106   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:22.339114   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:22.339119   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:22.343631   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:22.839591   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:22.839612   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:22.839618   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:22.839622   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:22.843834   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:22.845066   29367 node_ready.go:53] node "ha-322980-m02" has status "Ready":"False"
	I0505 21:18:23.339201   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:23.339227   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:23.339238   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:23.339246   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:23.346734   29367 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 21:18:23.839809   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:23.839836   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:23.839847   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:23.839851   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:23.843494   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.339660   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:24.339680   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.339686   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.339691   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.343340   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.344220   29367 node_ready.go:49] node "ha-322980-m02" has status "Ready":"True"
	I0505 21:18:24.344241   29367 node_ready.go:38] duration metric: took 8.0055689s for node "ha-322980-m02" to be "Ready" ...
	I0505 21:18:24.344251   29367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 21:18:24.344308   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:24.344319   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.344326   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.344329   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.349121   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:24.355038   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.355104   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-78zmw
	I0505 21:18:24.355110   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.355117   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.355123   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.358283   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.359260   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:24.359272   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.359278   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.359281   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.362177   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.362893   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.362908   29367 pod_ready.go:81] duration metric: took 7.847121ms for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.362919   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.362972   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqt45
	I0505 21:18:24.362982   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.362989   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.362994   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.365593   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.366298   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:24.366313   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.366323   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.366329   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.368668   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.369149   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.369164   29367 pod_ready.go:81] duration metric: took 6.237663ms for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.369172   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.369224   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980
	I0505 21:18:24.369235   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.369242   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.369247   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.371543   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.372131   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:24.372149   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.372157   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.372162   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.375096   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.375588   29367 pod_ready.go:92] pod "etcd-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.375609   29367 pod_ready.go:81] duration metric: took 6.427885ms for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.375620   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.375672   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m02
	I0505 21:18:24.375685   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.375695   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.375702   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.378107   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.378807   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:24.378821   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.378829   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.378834   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.381464   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.876213   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m02
	I0505 21:18:24.876235   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.876242   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.876247   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.879744   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.880247   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:24.880261   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.880268   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.880272   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.883094   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.883798   29367 pod_ready.go:92] pod "etcd-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.883816   29367 pod_ready.go:81] duration metric: took 508.185465ms for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.883830   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.940083   29367 request.go:629] Waited for 56.203588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:18:24.940159   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:18:24.940167   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.940184   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.940197   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.943603   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.140694   29367 request.go:629] Waited for 196.376779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.140751   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.140757   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.140764   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.140768   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.144370   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.145217   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:25.145238   29367 pod_ready.go:81] duration metric: took 261.40121ms for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.145251   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.340684   29367 request.go:629] Waited for 195.369973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:18:25.340755   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:18:25.340760   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.340767   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.340778   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.344364   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.540531   29367 request.go:629] Waited for 195.298535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:25.540580   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:25.540585   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.540594   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.540599   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.544432   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.545156   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:25.545174   29367 pod_ready.go:81] duration metric: took 399.915568ms for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.545190   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.740306   29367 request.go:629] Waited for 195.054768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980
	I0505 21:18:25.740357   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980
	I0505 21:18:25.740362   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.740368   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.740375   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.743743   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.940062   29367 request.go:629] Waited for 195.368531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.940115   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.940120   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.940128   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.940135   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.943974   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.944861   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:25.944882   29367 pod_ready.go:81] duration metric: took 399.684428ms for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.944894   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:26.139961   29367 request.go:629] Waited for 195.008004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.140022   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.140027   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.140034   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.140038   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.143851   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:26.340135   29367 request.go:629] Waited for 195.377838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.340201   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.340209   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.340220   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.340227   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.342958   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:26.540102   29367 request.go:629] Waited for 94.309581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.540178   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.540186   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.540203   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.540210   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.543990   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:26.740674   29367 request.go:629] Waited for 195.628445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.740729   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.740734   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.740741   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.740746   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.744188   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:26.946151   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.946177   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.946196   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.946204   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.950287   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:27.139810   29367 request.go:629] Waited for 188.283426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.139873   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.139879   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.139886   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.139889   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.143491   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:27.445558   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:27.445592   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.445600   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.445604   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.449930   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:27.539936   29367 request.go:629] Waited for 89.266345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.539990   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.539996   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.540011   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.540025   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.543502   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:27.945667   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:27.945694   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.945705   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.945710   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.949215   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:27.950124   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.950140   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.950147   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.950153   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.952800   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:27.953588   29367 pod_ready.go:102] pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 21:18:28.445701   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:28.445720   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.445727   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.445736   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.450099   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:28.451102   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:28.451119   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.451126   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.451131   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.454861   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:28.945870   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:28.945894   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.945904   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.945909   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.955098   29367 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0505 21:18:28.956275   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:28.956293   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.956303   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.956309   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.959133   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:28.959720   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:28.959743   29367 pod_ready.go:81] duration metric: took 3.014840076s for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:28.959760   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:28.959811   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd
	I0505 21:18:28.959818   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.959825   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.959833   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.963439   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.140603   29367 request.go:629] Waited for 176.36773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.140701   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.140710   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.140723   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.140734   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.144786   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:29.145620   29367 pod_ready.go:92] pod "kube-proxy-8xdzd" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:29.145645   29367 pod_ready.go:81] duration metric: took 185.874614ms for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.145659   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.340089   29367 request.go:629] Waited for 194.359804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:18:29.340174   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:18:29.340183   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.340215   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.340224   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.343873   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.540121   29367 request.go:629] Waited for 195.364212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:29.540169   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:29.540174   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.540181   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.540185   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.543776   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.544577   29367 pod_ready.go:92] pod "kube-proxy-wbf7q" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:29.544597   29367 pod_ready.go:81] duration metric: took 398.928355ms for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.544607   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.740355   29367 request.go:629] Waited for 195.68113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:18:29.740426   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:18:29.740436   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.740443   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.740447   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.744436   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.940665   29367 request.go:629] Waited for 195.379071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.940738   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.940746   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.940760   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.940765   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.944366   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.945150   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:29.945174   29367 pod_ready.go:81] duration metric: took 400.560267ms for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.945184   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:30.140364   29367 request.go:629] Waited for 195.10722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:18:30.140430   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:18:30.140439   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.140448   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.140455   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.143967   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:30.339885   29367 request.go:629] Waited for 195.326358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:30.339968   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:30.339977   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.339985   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.339995   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.344134   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:30.345057   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:30.345076   29367 pod_ready.go:81] duration metric: took 399.88044ms for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:30.345090   29367 pod_ready.go:38] duration metric: took 6.00082807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
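	The block above is minikube's pod_ready wait: it repeatedly GETs each system-critical pod (and its node) until the pod reports the Ready condition, backing off when the client-side throttler kicks in. A minimal client-go sketch of the same polling idea follows (illustrative only, not minikube's pod_ready.go; the kubeconfig path and the pod name taken from this run are assumptions, and the 6m0s timeout seen in the log is omitted for brevity):

	// podready_sketch.go - hedged sketch of polling a pod for the Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig location (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll one kube-system pod until its PodReady condition is True,
		// as the log above does for each control-plane component.
		for {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(
				context.TODO(), "kube-scheduler-ha-322980-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}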
	I0505 21:18:30.345107   29367 api_server.go:52] waiting for apiserver process to appear ...
	I0505 21:18:30.345160   29367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:18:30.365232   29367 api_server.go:72] duration metric: took 14.310585824s to wait for apiserver process to appear ...
	I0505 21:18:30.365262   29367 api_server.go:88] waiting for apiserver healthz status ...
	I0505 21:18:30.365284   29367 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0505 21:18:30.372031   29367 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I0505 21:18:30.372097   29367 round_trippers.go:463] GET https://192.168.39.178:8443/version
	I0505 21:18:30.372102   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.372109   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.372114   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.373309   29367 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 21:18:30.373439   29367 api_server.go:141] control plane version: v1.30.0
	I0505 21:18:30.373465   29367 api_server.go:131] duration metric: took 8.19422ms to wait for apiserver health ...
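	The healthz step above is a plain HTTPS GET against the control-plane endpoint that expects a 200 response with the body "ok", followed by a GET of /version. A rough Go equivalent (a sketch only: it skips TLS verification and relies on anonymous access to /healthz, whereas minikube authenticates with the cluster's own certificates):

	// healthz_sketch.go - hedged sketch of the apiserver health probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get("https://192.168.39.178:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}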
	I0505 21:18:30.373475   29367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 21:18:30.539803   29367 request.go:629] Waited for 166.253744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.539871   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.539877   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.539898   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.539919   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.548300   29367 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 21:18:30.554208   29367 system_pods.go:59] 17 kube-system pods found
	I0505 21:18:30.554242   29367 system_pods.go:61] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:18:30.554249   29367 system_pods.go:61] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:18:30.554253   29367 system_pods.go:61] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:18:30.554256   29367 system_pods.go:61] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:18:30.554259   29367 system_pods.go:61] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:18:30.554261   29367 system_pods.go:61] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:18:30.554265   29367 system_pods.go:61] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:18:30.554268   29367 system_pods.go:61] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:18:30.554272   29367 system_pods.go:61] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:18:30.554276   29367 system_pods.go:61] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:18:30.554281   29367 system_pods.go:61] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:18:30.554284   29367 system_pods.go:61] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:18:30.554286   29367 system_pods.go:61] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:18:30.554289   29367 system_pods.go:61] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:18:30.554292   29367 system_pods.go:61] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:18:30.554295   29367 system_pods.go:61] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:18:30.554298   29367 system_pods.go:61] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:18:30.554304   29367 system_pods.go:74] duration metric: took 180.821839ms to wait for pod list to return data ...
	I0505 21:18:30.554314   29367 default_sa.go:34] waiting for default service account to be created ...
	I0505 21:18:30.739678   29367 request.go:629] Waited for 185.280789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:18:30.739727   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:18:30.739731   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.739738   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.739743   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.743560   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:30.743780   29367 default_sa.go:45] found service account: "default"
	I0505 21:18:30.743797   29367 default_sa.go:55] duration metric: took 189.476335ms for default service account to be created ...
	I0505 21:18:30.743804   29367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 21:18:30.940411   29367 request.go:629] Waited for 196.536289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.940478   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.940486   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.940494   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.940500   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.947561   29367 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 21:18:30.953662   29367 system_pods.go:86] 17 kube-system pods found
	I0505 21:18:30.953685   29367 system_pods.go:89] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:18:30.953691   29367 system_pods.go:89] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:18:30.953697   29367 system_pods.go:89] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:18:30.953703   29367 system_pods.go:89] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:18:30.953709   29367 system_pods.go:89] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:18:30.953715   29367 system_pods.go:89] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:18:30.953724   29367 system_pods.go:89] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:18:30.953731   29367 system_pods.go:89] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:18:30.953741   29367 system_pods.go:89] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:18:30.953750   29367 system_pods.go:89] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:18:30.953755   29367 system_pods.go:89] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:18:30.953761   29367 system_pods.go:89] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:18:30.953765   29367 system_pods.go:89] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:18:30.953771   29367 system_pods.go:89] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:18:30.953775   29367 system_pods.go:89] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:18:30.953781   29367 system_pods.go:89] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:18:30.953784   29367 system_pods.go:89] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:18:30.953792   29367 system_pods.go:126] duration metric: took 209.983933ms to wait for k8s-apps to be running ...
	I0505 21:18:30.953802   29367 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 21:18:30.953853   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:18:30.972504   29367 system_svc.go:56] duration metric: took 18.696692ms WaitForService to wait for kubelet
	I0505 21:18:30.972524   29367 kubeadm.go:576] duration metric: took 14.91788416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:18:30.972545   29367 node_conditions.go:102] verifying NodePressure condition ...
	I0505 21:18:31.140010   29367 request.go:629] Waited for 167.398505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes
	I0505 21:18:31.140102   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes
	I0505 21:18:31.140110   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:31.140120   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:31.140127   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:31.144327   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:31.145063   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:18:31.145087   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:18:31.145099   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:18:31.145103   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:18:31.145114   29367 node_conditions.go:105] duration metric: took 172.561353ms to run NodePressure ...
	I0505 21:18:31.145132   29367 start.go:240] waiting for startup goroutines ...
	I0505 21:18:31.145159   29367 start.go:254] writing updated cluster config ...
	I0505 21:18:31.147465   29367 out.go:177] 
	I0505 21:18:31.149170   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:18:31.149261   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:18:31.151375   29367 out.go:177] * Starting "ha-322980-m03" control-plane node in "ha-322980" cluster
	I0505 21:18:31.152584   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:18:31.152610   29367 cache.go:56] Caching tarball of preloaded images
	I0505 21:18:31.152705   29367 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:18:31.152717   29367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:18:31.152814   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:18:31.152975   29367 start.go:360] acquireMachinesLock for ha-322980-m03: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:18:31.153022   29367 start.go:364] duration metric: took 22.512µs to acquireMachinesLock for "ha-322980-m03"
	I0505 21:18:31.153039   29367 start.go:93] Provisioning new machine with config: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:18:31.153130   29367 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0505 21:18:31.154658   29367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 21:18:31.154759   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:18:31.154799   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:18:31.170539   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45151
	I0505 21:18:31.170935   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:18:31.171430   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:18:31.171459   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:18:31.171810   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:18:31.172052   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:31.172220   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:31.172411   29367 start.go:159] libmachine.API.Create for "ha-322980" (driver="kvm2")
	I0505 21:18:31.172435   29367 client.go:168] LocalClient.Create starting
	I0505 21:18:31.172472   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 21:18:31.172512   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:18:31.172527   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:18:31.172596   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 21:18:31.172625   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:18:31.172643   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:18:31.172668   29367 main.go:141] libmachine: Running pre-create checks...
	I0505 21:18:31.172679   29367 main.go:141] libmachine: (ha-322980-m03) Calling .PreCreateCheck
	I0505 21:18:31.172846   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetConfigRaw
	I0505 21:18:31.173297   29367 main.go:141] libmachine: Creating machine...
	I0505 21:18:31.173311   29367 main.go:141] libmachine: (ha-322980-m03) Calling .Create
	I0505 21:18:31.173452   29367 main.go:141] libmachine: (ha-322980-m03) Creating KVM machine...
	I0505 21:18:31.174934   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found existing default KVM network
	I0505 21:18:31.175053   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found existing private KVM network mk-ha-322980
	I0505 21:18:31.175208   29367 main.go:141] libmachine: (ha-322980-m03) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03 ...
	I0505 21:18:31.175237   29367 main.go:141] libmachine: (ha-322980-m03) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 21:18:31.175319   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.175188   30843 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:18:31.175433   29367 main.go:141] libmachine: (ha-322980-m03) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 21:18:31.410349   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.410225   30843 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa...
	I0505 21:18:31.506568   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.506471   30843 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/ha-322980-m03.rawdisk...
	I0505 21:18:31.506601   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Writing magic tar header
	I0505 21:18:31.506617   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Writing SSH key tar header
	I0505 21:18:31.506634   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.506601   30843 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03 ...
	I0505 21:18:31.506776   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03
	I0505 21:18:31.506806   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 21:18:31.506821   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03 (perms=drwx------)
	I0505 21:18:31.506842   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 21:18:31.506856   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 21:18:31.506875   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 21:18:31.506889   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:18:31.506902   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 21:18:31.506918   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 21:18:31.506928   29367 main.go:141] libmachine: (ha-322980-m03) Creating domain...
	I0505 21:18:31.506940   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 21:18:31.506951   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 21:18:31.506978   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins
	I0505 21:18:31.507002   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home
	I0505 21:18:31.507013   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Skipping /home - not owner
	I0505 21:18:31.508005   29367 main.go:141] libmachine: (ha-322980-m03) define libvirt domain using xml: 
	I0505 21:18:31.508026   29367 main.go:141] libmachine: (ha-322980-m03) <domain type='kvm'>
	I0505 21:18:31.508037   29367 main.go:141] libmachine: (ha-322980-m03)   <name>ha-322980-m03</name>
	I0505 21:18:31.508049   29367 main.go:141] libmachine: (ha-322980-m03)   <memory unit='MiB'>2200</memory>
	I0505 21:18:31.508058   29367 main.go:141] libmachine: (ha-322980-m03)   <vcpu>2</vcpu>
	I0505 21:18:31.508065   29367 main.go:141] libmachine: (ha-322980-m03)   <features>
	I0505 21:18:31.508072   29367 main.go:141] libmachine: (ha-322980-m03)     <acpi/>
	I0505 21:18:31.508078   29367 main.go:141] libmachine: (ha-322980-m03)     <apic/>
	I0505 21:18:31.508085   29367 main.go:141] libmachine: (ha-322980-m03)     <pae/>
	I0505 21:18:31.508091   29367 main.go:141] libmachine: (ha-322980-m03)     
	I0505 21:18:31.508100   29367 main.go:141] libmachine: (ha-322980-m03)   </features>
	I0505 21:18:31.508109   29367 main.go:141] libmachine: (ha-322980-m03)   <cpu mode='host-passthrough'>
	I0505 21:18:31.508125   29367 main.go:141] libmachine: (ha-322980-m03)   
	I0505 21:18:31.508137   29367 main.go:141] libmachine: (ha-322980-m03)   </cpu>
	I0505 21:18:31.508146   29367 main.go:141] libmachine: (ha-322980-m03)   <os>
	I0505 21:18:31.508157   29367 main.go:141] libmachine: (ha-322980-m03)     <type>hvm</type>
	I0505 21:18:31.508167   29367 main.go:141] libmachine: (ha-322980-m03)     <boot dev='cdrom'/>
	I0505 21:18:31.508190   29367 main.go:141] libmachine: (ha-322980-m03)     <boot dev='hd'/>
	I0505 21:18:31.508203   29367 main.go:141] libmachine: (ha-322980-m03)     <bootmenu enable='no'/>
	I0505 21:18:31.508214   29367 main.go:141] libmachine: (ha-322980-m03)   </os>
	I0505 21:18:31.508226   29367 main.go:141] libmachine: (ha-322980-m03)   <devices>
	I0505 21:18:31.508236   29367 main.go:141] libmachine: (ha-322980-m03)     <disk type='file' device='cdrom'>
	I0505 21:18:31.508254   29367 main.go:141] libmachine: (ha-322980-m03)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/boot2docker.iso'/>
	I0505 21:18:31.508270   29367 main.go:141] libmachine: (ha-322980-m03)       <target dev='hdc' bus='scsi'/>
	I0505 21:18:31.508282   29367 main.go:141] libmachine: (ha-322980-m03)       <readonly/>
	I0505 21:18:31.508293   29367 main.go:141] libmachine: (ha-322980-m03)     </disk>
	I0505 21:18:31.508304   29367 main.go:141] libmachine: (ha-322980-m03)     <disk type='file' device='disk'>
	I0505 21:18:31.508319   29367 main.go:141] libmachine: (ha-322980-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 21:18:31.508336   29367 main.go:141] libmachine: (ha-322980-m03)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/ha-322980-m03.rawdisk'/>
	I0505 21:18:31.508352   29367 main.go:141] libmachine: (ha-322980-m03)       <target dev='hda' bus='virtio'/>
	I0505 21:18:31.508380   29367 main.go:141] libmachine: (ha-322980-m03)     </disk>
	I0505 21:18:31.508400   29367 main.go:141] libmachine: (ha-322980-m03)     <interface type='network'>
	I0505 21:18:31.508411   29367 main.go:141] libmachine: (ha-322980-m03)       <source network='mk-ha-322980'/>
	I0505 21:18:31.508423   29367 main.go:141] libmachine: (ha-322980-m03)       <model type='virtio'/>
	I0505 21:18:31.508431   29367 main.go:141] libmachine: (ha-322980-m03)     </interface>
	I0505 21:18:31.508442   29367 main.go:141] libmachine: (ha-322980-m03)     <interface type='network'>
	I0505 21:18:31.508453   29367 main.go:141] libmachine: (ha-322980-m03)       <source network='default'/>
	I0505 21:18:31.508464   29367 main.go:141] libmachine: (ha-322980-m03)       <model type='virtio'/>
	I0505 21:18:31.508483   29367 main.go:141] libmachine: (ha-322980-m03)     </interface>
	I0505 21:18:31.508499   29367 main.go:141] libmachine: (ha-322980-m03)     <serial type='pty'>
	I0505 21:18:31.508511   29367 main.go:141] libmachine: (ha-322980-m03)       <target port='0'/>
	I0505 21:18:31.508522   29367 main.go:141] libmachine: (ha-322980-m03)     </serial>
	I0505 21:18:31.508534   29367 main.go:141] libmachine: (ha-322980-m03)     <console type='pty'>
	I0505 21:18:31.508545   29367 main.go:141] libmachine: (ha-322980-m03)       <target type='serial' port='0'/>
	I0505 21:18:31.508557   29367 main.go:141] libmachine: (ha-322980-m03)     </console>
	I0505 21:18:31.508567   29367 main.go:141] libmachine: (ha-322980-m03)     <rng model='virtio'>
	I0505 21:18:31.508601   29367 main.go:141] libmachine: (ha-322980-m03)       <backend model='random'>/dev/random</backend>
	I0505 21:18:31.508626   29367 main.go:141] libmachine: (ha-322980-m03)     </rng>
	I0505 21:18:31.508636   29367 main.go:141] libmachine: (ha-322980-m03)     
	I0505 21:18:31.508657   29367 main.go:141] libmachine: (ha-322980-m03)     
	I0505 21:18:31.508674   29367 main.go:141] libmachine: (ha-322980-m03)   </devices>
	I0505 21:18:31.508689   29367 main.go:141] libmachine: (ha-322980-m03) </domain>
	I0505 21:18:31.508698   29367 main.go:141] libmachine: (ha-322980-m03) 
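	The XML printed above is the libvirt domain the kvm2 driver builds for the new node; "define libvirt domain using xml" followed by "Creating domain..." corresponds to defining and then starting that domain so DHCP can hand it an address. A minimal sketch using the libvirt Go bindings (illustrative only; the file name is an assumption and this is not the actual docker-machine-driver-kvm2 code):

	// domain_sketch.go - hedged sketch of defining and starting a libvirt domain.
	package main

	import (
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// The <domain> XML as logged above, saved to a file for this sketch.
		xml, err := os.ReadFile("ha-322980-m03.xml")
		if err != nil {
			panic(err)
		}
		// KVMQemuURI from the profile config shown earlier.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// "define libvirt domain using xml"
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		// "Creating domain..." - boots the VM.
		if err := dom.Create(); err != nil {
			panic(err)
		}
	}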
	I0505 21:18:31.515278   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:90:b2:60 in network default
	I0505 21:18:31.515919   29367 main.go:141] libmachine: (ha-322980-m03) Ensuring networks are active...
	I0505 21:18:31.515941   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:31.516616   29367 main.go:141] libmachine: (ha-322980-m03) Ensuring network default is active
	I0505 21:18:31.517069   29367 main.go:141] libmachine: (ha-322980-m03) Ensuring network mk-ha-322980 is active
	I0505 21:18:31.517420   29367 main.go:141] libmachine: (ha-322980-m03) Getting domain xml...
	I0505 21:18:31.518170   29367 main.go:141] libmachine: (ha-322980-m03) Creating domain...
	I0505 21:18:32.728189   29367 main.go:141] libmachine: (ha-322980-m03) Waiting to get IP...
	I0505 21:18:32.729118   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:32.729602   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:32.729631   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:32.729550   30843 retry.go:31] will retry after 199.252104ms: waiting for machine to come up
	I0505 21:18:32.930028   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:32.930485   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:32.930513   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:32.930436   30843 retry.go:31] will retry after 253.528343ms: waiting for machine to come up
	I0505 21:18:33.185827   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:33.186234   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:33.186256   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:33.186211   30843 retry.go:31] will retry after 453.653869ms: waiting for machine to come up
	I0505 21:18:33.641714   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:33.642075   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:33.642101   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:33.642031   30843 retry.go:31] will retry after 423.63847ms: waiting for machine to come up
	I0505 21:18:34.067574   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:34.068005   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:34.068030   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:34.067963   30843 retry.go:31] will retry after 707.190206ms: waiting for machine to come up
	I0505 21:18:34.776598   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:34.777113   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:34.777137   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:34.777051   30843 retry.go:31] will retry after 823.896849ms: waiting for machine to come up
	I0505 21:18:35.603014   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:35.603418   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:35.603443   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:35.603372   30843 retry.go:31] will retry after 1.150013486s: waiting for machine to come up
	I0505 21:18:36.755487   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:36.755968   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:36.756006   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:36.755960   30843 retry.go:31] will retry after 1.125565148s: waiting for machine to come up
	I0505 21:18:37.882632   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:37.882961   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:37.882990   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:37.882924   30843 retry.go:31] will retry after 1.186554631s: waiting for machine to come up
	I0505 21:18:39.070675   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:39.071010   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:39.071034   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:39.070949   30843 retry.go:31] will retry after 2.150680496s: waiting for machine to come up
	I0505 21:18:41.223031   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:41.223557   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:41.223592   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:41.223476   30843 retry.go:31] will retry after 2.688830385s: waiting for machine to come up
	I0505 21:18:43.913880   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:43.914296   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:43.914317   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:43.914267   30843 retry.go:31] will retry after 2.277627535s: waiting for machine to come up
	I0505 21:18:46.193457   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:46.193888   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:46.193919   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:46.193839   30843 retry.go:31] will retry after 3.873768109s: waiting for machine to come up
	I0505 21:18:50.068786   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:50.069219   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:50.069249   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:50.069169   30843 retry.go:31] will retry after 4.135874367s: waiting for machine to come up
	I0505 21:18:54.208167   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:54.208555   29367 main.go:141] libmachine: (ha-322980-m03) Found IP for machine: 192.168.39.29
	I0505 21:18:54.208571   29367 main.go:141] libmachine: (ha-322980-m03) Reserving static IP address...
	I0505 21:18:54.208584   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has current primary IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:54.208947   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find host DHCP lease matching {name: "ha-322980-m03", mac: "52:54:00:c6:64:b7", ip: "192.168.39.29"} in network mk-ha-322980
	I0505 21:18:54.279929   29367 main.go:141] libmachine: (ha-322980-m03) Reserved static IP address: 192.168.39.29
	I0505 21:18:54.279960   29367 main.go:141] libmachine: (ha-322980-m03) Waiting for SSH to be available...
	I0505 21:18:54.279971   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Getting to WaitForSSH function...
	I0505 21:18:54.282838   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:54.283259   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980
	I0505 21:18:54.283287   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find defined IP address of network mk-ha-322980 interface with MAC address 52:54:00:c6:64:b7
	I0505 21:18:54.283437   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH client type: external
	I0505 21:18:54.283466   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa (-rw-------)
	I0505 21:18:54.283507   29367 main.go:141] libmachine: (ha-322980-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:18:54.283527   29367 main.go:141] libmachine: (ha-322980-m03) DBG | About to run SSH command:
	I0505 21:18:54.283545   29367 main.go:141] libmachine: (ha-322980-m03) DBG | exit 0
	I0505 21:18:54.287074   29367 main.go:141] libmachine: (ha-322980-m03) DBG | SSH cmd err, output: exit status 255: 
	I0505 21:18:54.287098   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0505 21:18:54.287108   29367 main.go:141] libmachine: (ha-322980-m03) DBG | command : exit 0
	I0505 21:18:54.287113   29367 main.go:141] libmachine: (ha-322980-m03) DBG | err     : exit status 255
	I0505 21:18:54.287121   29367 main.go:141] libmachine: (ha-322980-m03) DBG | output  : 
	I0505 21:18:57.287660   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Getting to WaitForSSH function...
	I0505 21:18:57.290086   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.290564   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.290589   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.290738   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH client type: external
	I0505 21:18:57.290759   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa (-rw-------)
	I0505 21:18:57.290813   29367 main.go:141] libmachine: (ha-322980-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:18:57.290839   29367 main.go:141] libmachine: (ha-322980-m03) DBG | About to run SSH command:
	I0505 21:18:57.290853   29367 main.go:141] libmachine: (ha-322980-m03) DBG | exit 0
	I0505 21:18:57.419820   29367 main.go:141] libmachine: (ha-322980-m03) DBG | SSH cmd err, output: <nil>: 
	I0505 21:18:57.420178   29367 main.go:141] libmachine: (ha-322980-m03) KVM machine creation complete!
	I0505 21:18:57.420458   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetConfigRaw
	I0505 21:18:57.420935   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:57.421107   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:57.421278   29367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 21:18:57.421296   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:18:57.422618   29367 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 21:18:57.422637   29367 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 21:18:57.422645   29367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 21:18:57.422654   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.424963   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.425355   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.425382   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.425504   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.425653   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.425798   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.425929   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.426085   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.426328   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.426340   29367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 21:18:57.535116   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:18:57.535143   29367 main.go:141] libmachine: Detecting the provisioner...
	I0505 21:18:57.535155   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.538912   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.539571   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.539600   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.539793   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.540003   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.540177   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.540355   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.540524   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.540674   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.540684   29367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 21:18:57.648740   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 21:18:57.648802   29367 main.go:141] libmachine: found compatible host: buildroot
	I0505 21:18:57.648809   29367 main.go:141] libmachine: Provisioning with buildroot...
	I0505 21:18:57.648816   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:57.649084   29367 buildroot.go:166] provisioning hostname "ha-322980-m03"
	I0505 21:18:57.649112   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:57.649306   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.652050   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.652395   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.652423   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.652551   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.652717   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.652856   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.653045   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.653216   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.653393   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.653409   29367 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980-m03 && echo "ha-322980-m03" | sudo tee /etc/hostname
	I0505 21:18:57.780562   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980-m03
	
	I0505 21:18:57.780594   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.783541   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.783958   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.783991   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.784191   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.784384   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.784613   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.784801   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.784986   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.785186   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.785218   29367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:18:57.906398   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:18:57.906433   29367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:18:57.906455   29367 buildroot.go:174] setting up certificates
	I0505 21:18:57.906469   29367 provision.go:84] configureAuth start
	I0505 21:18:57.906485   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:57.906749   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:18:57.909266   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.909659   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.909690   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.909837   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.911619   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.911964   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.911990   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.912105   29367 provision.go:143] copyHostCerts
	I0505 21:18:57.912136   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:18:57.912173   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:18:57.912186   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:18:57.912292   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:18:57.912394   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:18:57.912420   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:18:57.912425   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:18:57.912463   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:18:57.912525   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:18:57.912548   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:18:57.912557   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:18:57.912592   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:18:57.912655   29367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980-m03 san=[127.0.0.1 192.168.39.29 ha-322980-m03 localhost minikube]
	I0505 21:18:58.060988   29367 provision.go:177] copyRemoteCerts
	I0505 21:18:58.061038   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:18:58.061059   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.063811   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.064265   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.064295   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.064465   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.064638   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.064770   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.064871   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:58.150293   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:18:58.150356   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:18:58.179798   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:18:58.179861   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:18:58.207727   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:18:58.207795   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 21:18:58.237652   29367 provision.go:87] duration metric: took 331.170378ms to configureAuth
	I0505 21:18:58.237680   29367 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:18:58.237923   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:18:58.238003   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.240687   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.241062   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.241103   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.241279   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.241439   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.241595   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.241715   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.241856   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:58.242007   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:58.242022   29367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:18:58.541225   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:18:58.541253   29367 main.go:141] libmachine: Checking connection to Docker...
	I0505 21:18:58.541263   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetURL
	I0505 21:18:58.542725   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using libvirt version 6000000
	I0505 21:18:58.545160   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.545564   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.545597   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.545773   29367 main.go:141] libmachine: Docker is up and running!
	I0505 21:18:58.545789   29367 main.go:141] libmachine: Reticulating splines...
	I0505 21:18:58.545797   29367 client.go:171] duration metric: took 27.373355272s to LocalClient.Create
	I0505 21:18:58.545824   29367 start.go:167] duration metric: took 27.373413959s to libmachine.API.Create "ha-322980"
	I0505 21:18:58.545836   29367 start.go:293] postStartSetup for "ha-322980-m03" (driver="kvm2")
	I0505 21:18:58.545851   29367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:18:58.545874   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.546118   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:18:58.546146   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.548424   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.548850   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.548880   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.548996   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.549168   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.549342   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.549511   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:58.635360   29367 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:18:58.640495   29367 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:18:58.640520   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:18:58.640586   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:18:58.640675   29367 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:18:58.640686   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:18:58.640790   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:18:58.650860   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:18:58.678725   29367 start.go:296] duration metric: took 132.877481ms for postStartSetup
	I0505 21:18:58.678770   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetConfigRaw
	I0505 21:18:58.679495   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:18:58.682278   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.682582   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.682607   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.682828   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:18:58.682993   29367 start.go:128] duration metric: took 27.529851966s to createHost
	I0505 21:18:58.683015   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.685049   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.685436   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.685465   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.685590   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.685769   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.685932   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.686098   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.686238   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:58.686386   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:58.686397   29367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:18:58.796631   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714943938.783062826
	
	I0505 21:18:58.796654   29367 fix.go:216] guest clock: 1714943938.783062826
	I0505 21:18:58.796663   29367 fix.go:229] Guest: 2024-05-05 21:18:58.783062826 +0000 UTC Remote: 2024-05-05 21:18:58.683005861 +0000 UTC m=+210.545765441 (delta=100.056965ms)
	I0505 21:18:58.796683   29367 fix.go:200] guest clock delta is within tolerance: 100.056965ms
	I0505 21:18:58.796693   29367 start.go:83] releasing machines lock for "ha-322980-m03", held for 27.643657327s
	I0505 21:18:58.796716   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.796972   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:18:58.799515   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.799874   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.799900   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.802093   29367 out.go:177] * Found network options:
	I0505 21:18:58.803610   29367 out.go:177]   - NO_PROXY=192.168.39.178,192.168.39.228
	W0505 21:18:58.804940   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 21:18:58.804962   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 21:18:58.804977   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.805551   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.805782   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.805876   29367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:18:58.805915   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	W0505 21:18:58.805979   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 21:18:58.806003   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 21:18:58.806068   29367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:18:58.806089   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.808854   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809186   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809452   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.809483   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809630   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.809757   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.809786   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809791   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.809969   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.810009   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.810174   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:58.810227   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.810370   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.810498   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:59.055917   29367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:18:59.063181   29367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:18:59.063258   29367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:18:59.082060   29367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 21:18:59.082081   29367 start.go:494] detecting cgroup driver to use...
	I0505 21:18:59.082143   29367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:18:59.102490   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:18:59.118744   29367 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:18:59.118798   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:18:59.135687   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:18:59.161082   29367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:18:59.284170   29367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:18:59.430037   29367 docker.go:233] disabling docker service ...
	I0505 21:18:59.430096   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:18:59.445892   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:18:59.459691   29367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:18:59.612769   29367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:18:59.773670   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:18:59.789087   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:18:59.809428   29367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:18:59.809496   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.821422   29367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:18:59.821488   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.833237   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.845606   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.857286   29367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:18:59.870600   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.883365   29367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.902940   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.915118   29367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:18:59.925710   29367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 21:18:59.925762   29367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 21:18:59.940381   29367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:18:59.950882   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:19:00.096868   29367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:19:00.252619   29367 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:19:00.252698   29367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:19:00.258481   29367 start.go:562] Will wait 60s for crictl version
	I0505 21:19:00.258543   29367 ssh_runner.go:195] Run: which crictl
	I0505 21:19:00.263197   29367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:19:00.311270   29367 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:19:00.311361   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:19:00.344287   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:19:00.379161   29367 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:19:00.380590   29367 out.go:177]   - env NO_PROXY=192.168.39.178
	I0505 21:19:00.382104   29367 out.go:177]   - env NO_PROXY=192.168.39.178,192.168.39.228
	I0505 21:19:00.383357   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:19:00.386321   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:19:00.386717   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:19:00.386750   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:19:00.386980   29367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:19:00.392694   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:19:00.408501   29367 mustload.go:65] Loading cluster: ha-322980
	I0505 21:19:00.408768   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:19:00.409091   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:19:00.409140   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:19:00.425690   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0505 21:19:00.426132   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:19:00.426599   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:19:00.426624   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:19:00.426931   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:19:00.427126   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:19:00.428655   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:19:00.429056   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:19:00.429099   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:19:00.444224   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0505 21:19:00.444622   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:19:00.445055   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:19:00.445077   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:19:00.445418   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:19:00.445650   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:19:00.445811   29367 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.29
	I0505 21:19:00.445824   29367 certs.go:194] generating shared ca certs ...
	I0505 21:19:00.445840   29367 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:19:00.445966   29367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:19:00.446007   29367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:19:00.446016   29367 certs.go:256] generating profile certs ...
	I0505 21:19:00.446078   29367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:19:00.446115   29367 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3
	I0505 21:19:00.446128   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.29 192.168.39.254]
	I0505 21:19:00.557007   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3 ...
	I0505 21:19:00.557038   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3: {Name:mkeabfd63b086fbe6c5a694b37c05a9029ccc5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:19:00.557219   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3 ...
	I0505 21:19:00.557237   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3: {Name:mkcf261d94995a12f366032c627df88044d19e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:19:00.557308   29367 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:19:00.557425   29367 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:19:00.557541   29367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:19:00.557556   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:19:00.557570   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:19:00.557583   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:19:00.557595   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:19:00.557607   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:19:00.557618   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:19:00.557631   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:19:00.557642   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:19:00.557689   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:19:00.557732   29367 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:19:00.557745   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:19:00.557778   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:19:00.557806   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:19:00.557834   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:19:00.557883   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:19:00.557918   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:19:00.557937   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:00.557953   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:19:00.557989   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:19:00.561068   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:00.561734   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:19:00.561760   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:00.561951   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:19:00.562136   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:19:00.562313   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:19:00.562444   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:19:00.639783   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0505 21:19:00.646267   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 21:19:00.659763   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0505 21:19:00.665438   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0505 21:19:00.677776   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 21:19:00.682618   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 21:19:00.694377   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0505 21:19:00.699270   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 21:19:00.710440   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0505 21:19:00.715212   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 21:19:00.726959   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0505 21:19:00.733524   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 21:19:00.745987   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:19:00.776306   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:19:00.804124   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:19:00.833539   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:19:00.860099   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0505 21:19:00.887074   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 21:19:00.912781   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:19:00.939713   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:19:00.966650   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:19:00.991875   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:19:01.019615   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:19:01.044884   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 21:19:01.064393   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0505 21:19:01.083899   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 21:19:01.102815   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 21:19:01.123852   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 21:19:01.143578   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 21:19:01.162569   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 21:19:01.181825   29367 ssh_runner.go:195] Run: openssl version
	I0505 21:19:01.187965   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:19:01.200357   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:19:01.205126   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:19:01.205173   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:19:01.211088   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:19:01.223287   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:19:01.235075   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:01.239792   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:01.239850   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:01.247145   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:19:01.262467   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:19:01.275296   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:19:01.280073   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:19:01.280134   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:19:01.286359   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:19:01.298575   29367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:19:01.303164   29367 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 21:19:01.303230   29367 kubeadm.go:928] updating node {m03 192.168.39.29 8443 v1.30.0 crio true true} ...
	I0505 21:19:01.303328   29367 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:19:01.303359   29367 kube-vip.go:111] generating kube-vip config ...
	I0505 21:19:01.303401   29367 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:19:01.321789   29367 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:19:01.321858   29367 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 21:19:01.321920   29367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:19:01.334314   29367 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0505 21:19:01.334375   29367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0505 21:19:01.345667   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0505 21:19:01.345679   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0505 21:19:01.345697   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:19:01.345712   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:19:01.345667   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0505 21:19:01.345780   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:19:01.345809   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:19:01.345875   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:19:01.361980   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:19:01.361996   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0505 21:19:01.362026   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0505 21:19:01.362062   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:19:01.362067   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0505 21:19:01.362090   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0505 21:19:01.387238   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0505 21:19:01.387273   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0505 21:19:02.379027   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 21:19:02.390731   29367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 21:19:02.409464   29367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:19:02.428169   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:19:02.447238   29367 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:19:02.451984   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:19:02.466221   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:19:02.602089   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:19:02.622092   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:19:02.622538   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:19:02.622588   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:19:02.639531   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0505 21:19:02.639945   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:19:02.640442   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:19:02.640469   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:19:02.640781   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:19:02.640976   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:19:02.641134   29367 start.go:316] joinCluster: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cluster
Name:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:19:02.641244   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0505 21:19:02.641265   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:19:02.644568   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:02.644993   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:19:02.645018   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:02.645202   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:19:02.645369   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:19:02.645487   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:19:02.645593   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:19:02.821421   29367 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:19:02.821470   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ere86y.rsom8095c8gt6u0e --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m03 --control-plane --apiserver-advertise-address=192.168.39.29 --apiserver-bind-port=8443"
	I0505 21:19:27.235707   29367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ere86y.rsom8095c8gt6u0e --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m03 --control-plane --apiserver-advertise-address=192.168.39.29 --apiserver-bind-port=8443": (24.41421058s)
	I0505 21:19:27.235750   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0505 21:19:27.795445   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-322980-m03 minikube.k8s.io/updated_at=2024_05_05T21_19_27_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=ha-322980 minikube.k8s.io/primary=false
	I0505 21:19:27.943880   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-322980-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0505 21:19:28.087974   29367 start.go:318] duration metric: took 25.446835494s to joinCluster
	I0505 21:19:28.088051   29367 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:19:28.089332   29367 out.go:177] * Verifying Kubernetes components...
	I0505 21:19:28.090663   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:19:28.088443   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:19:28.402321   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:19:28.441042   29367 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:19:28.441463   29367 kapi.go:59] client config for ha-322980: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 21:19:28.441552   29367 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.178:8443
	I0505 21:19:28.441814   29367 node_ready.go:35] waiting up to 6m0s for node "ha-322980-m03" to be "Ready" ...
	I0505 21:19:28.441906   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:28.441918   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:28.441929   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:28.441938   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:28.445385   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:28.942629   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:28.942657   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:28.942668   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:28.942673   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:28.946547   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:29.442717   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:29.442747   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:29.442758   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:29.442764   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:29.447216   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:29.942477   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:29.942497   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:29.942504   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:29.942508   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:29.946281   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:30.442120   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:30.442141   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:30.442148   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:30.442152   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:30.446124   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:30.446790   29367 node_ready.go:53] node "ha-322980-m03" has status "Ready":"False"
	I0505 21:19:30.942072   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:30.942097   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:30.942109   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:30.942115   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:30.963626   29367 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0505 21:19:31.442431   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:31.442457   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:31.442467   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:31.442475   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:31.446018   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:31.942496   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:31.942516   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:31.942528   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:31.942536   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:31.946384   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:32.442609   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:32.442630   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:32.442638   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:32.442643   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:32.446771   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:32.448345   29367 node_ready.go:53] node "ha-322980-m03" has status "Ready":"False"
	I0505 21:19:32.942947   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:32.942969   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:32.942977   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:32.942981   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:32.946462   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:33.442291   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:33.442320   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:33.442332   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:33.442339   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:33.447124   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:33.942912   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:33.942933   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:33.942941   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:33.942947   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:33.947532   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:34.442132   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:34.442163   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:34.442169   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:34.442173   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:34.447683   29367 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 21:19:34.448388   29367 node_ready.go:53] node "ha-322980-m03" has status "Ready":"False"
	I0505 21:19:34.942774   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:34.942797   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:34.942805   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:34.942811   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:34.946789   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.442514   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:35.442533   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.442539   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.442544   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.446342   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.447212   29367 node_ready.go:49] node "ha-322980-m03" has status "Ready":"True"
	I0505 21:19:35.447239   29367 node_ready.go:38] duration metric: took 7.005404581s for node "ha-322980-m03" to be "Ready" ...
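The node_ready wait logged above is a simple poll: the API server is queried at /api/v1/nodes/ha-322980-m03 roughly every 500ms until the node's Ready condition reports True. The following is a minimal client-go sketch of that pattern, for illustration only; it is not minikube's node_ready.go, and the kubeconfig path, node name and timeout are assumptions for the example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports Ready=True
// or the timeout elapses, mirroring the roughly-500ms GET loop in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	// Assumed kubeconfig path; the run above uses minikube's own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-322980-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

minikube's actual requests additionally go through its logging round-tripper (round_trippers.go) and the kapi client config shown in the surrounding log lines.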
	I0505 21:19:35.447252   29367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 21:19:35.447326   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:35.447342   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.447352   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.447359   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.454461   29367 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 21:19:35.462354   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.462426   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-78zmw
	I0505 21:19:35.462435   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.462443   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.462447   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.466152   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.467069   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:35.467088   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.467097   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.467103   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.470307   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.470845   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.470864   29367 pod_ready.go:81] duration metric: took 8.486217ms for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.470873   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.470927   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqt45
	I0505 21:19:35.470936   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.470943   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.470947   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.474030   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.474923   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:35.474946   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.474957   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.474962   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.478560   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.479299   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.479323   29367 pod_ready.go:81] duration metric: took 8.442107ms for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.479335   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.479404   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980
	I0505 21:19:35.479415   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.479425   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.479431   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.482559   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.483116   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:35.483129   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.483136   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.483139   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.486243   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.486789   29367 pod_ready.go:92] pod "etcd-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.486808   29367 pod_ready.go:81] duration metric: took 7.466072ms for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.486818   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.486861   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m02
	I0505 21:19:35.486871   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.486878   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.486882   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.490279   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.490751   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:35.490768   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.490778   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.490786   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.494034   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.494650   29367 pod_ready.go:92] pod "etcd-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.494665   29367 pod_ready.go:81] duration metric: took 7.842312ms for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.494673   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.643103   29367 request.go:629] Waited for 148.371982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m03
	I0505 21:19:35.643189   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m03
	I0505 21:19:35.643198   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.643206   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.643212   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.647580   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:35.843087   29367 request.go:629] Waited for 194.428682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:35.843166   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:35.843174   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.843189   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.843203   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.846828   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.847895   29367 pod_ready.go:92] pod "etcd-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.847918   29367 pod_ready.go:81] duration metric: took 353.238939ms for pod "etcd-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.847943   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.043049   29367 request.go:629] Waited for 195.034663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:19:36.043136   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:19:36.043146   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.043162   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.043175   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.050109   29367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 21:19:36.243498   29367 request.go:629] Waited for 192.350383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:36.243561   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:36.243572   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.243582   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.243591   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.247268   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:36.248083   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:36.248102   29367 pod_ready.go:81] duration metric: took 400.150655ms for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.248112   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.443249   29367 request.go:629] Waited for 195.071058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:19:36.443320   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:19:36.443325   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.443334   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.443341   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.447570   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:36.643595   29367 request.go:629] Waited for 195.374318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:36.643682   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:36.643697   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.643712   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.643719   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.648195   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:36.649103   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:36.649128   29367 pod_ready.go:81] duration metric: took 401.00883ms for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.649143   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.843546   29367 request.go:629] Waited for 194.319072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:36.843609   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:36.843614   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.843631   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.843637   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.847887   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:37.042736   29367 request.go:629] Waited for 194.236068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.042806   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.042812   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.042819   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.042826   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.046788   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:37.242545   29367 request.go:629] Waited for 93.237949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:37.242627   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:37.242634   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.242648   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.242657   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.246071   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:37.443574   29367 request.go:629] Waited for 196.323769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.443666   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.443680   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.443692   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.443700   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.448543   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:37.649890   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:37.649914   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.649925   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.649935   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.653774   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:37.843060   29367 request.go:629] Waited for 188.386721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.843122   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.843137   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.843143   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.843147   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.847679   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:38.149417   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:38.149442   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.149451   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.149456   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.152908   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:38.242684   29367 request.go:629] Waited for 88.923377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:38.242731   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:38.242736   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.242744   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.242747   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.246536   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:38.650252   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:38.650278   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.650289   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.650296   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.653966   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:38.655004   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:38.655022   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.655032   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.655038   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.659818   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:38.660414   29367 pod_ready.go:102] pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace has status "Ready":"False"
	I0505 21:19:39.149705   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:39.149730   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.149741   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.149747   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.153246   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:39.154213   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:39.154232   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.154243   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.154248   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.157387   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:39.650228   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:39.650249   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.650257   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.650261   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.654174   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:39.655176   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:39.655199   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.655206   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.655213   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.658325   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:40.149455   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:40.149478   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.149486   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.149492   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.153620   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:40.154589   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:40.154605   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.154612   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.154617   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.157755   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:40.649467   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:40.649494   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.649502   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.649506   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.653345   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:40.654392   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:40.654411   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.654421   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.654433   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.657548   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:41.149908   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:41.149935   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.149945   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.149953   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.154123   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:41.155133   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:41.155152   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.155159   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.155163   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.158195   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:41.158975   29367 pod_ready.go:102] pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace has status "Ready":"False"
	I0505 21:19:41.649749   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:41.649775   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.649787   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.649794   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.654568   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:41.656551   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:41.656565   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.656572   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.656577   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.659941   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:42.150082   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:42.150105   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.150113   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.150116   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.153424   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:42.154599   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:42.154616   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.154625   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.154631   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.158203   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:42.649950   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:42.649988   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.649996   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.650005   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.654062   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:42.655084   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:42.655103   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.655115   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.655121   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.658378   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.149411   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:43.149435   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.149447   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.149453   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.152549   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.153483   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:43.153500   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.153510   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.153520   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.156274   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.156887   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.156905   29367 pod_ready.go:81] duration metric: took 6.507754855s for pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.156914   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.156962   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980
	I0505 21:19:43.156970   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.156977   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.156982   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.159900   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.160433   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:43.160447   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.160454   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.160458   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.163045   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.163577   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.163597   29367 pod_ready.go:81] duration metric: took 6.675601ms for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.163609   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.163674   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:19:43.163685   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.163697   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.163704   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.167101   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.167760   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:43.167774   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.167781   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.167786   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.170373   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.171104   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.171122   29367 pod_ready.go:81] duration metric: took 7.503084ms for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.171131   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.243221   29367 request.go:629] Waited for 72.041665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m03
	I0505 21:19:43.243279   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m03
	I0505 21:19:43.243284   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.243296   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.243300   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.246923   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.443103   29367 request.go:629] Waited for 195.403606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:43.443188   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:43.443194   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.443201   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.443206   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.447489   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:43.448483   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.448501   29367 pod_ready.go:81] duration metric: took 277.36467ms for pod "kube-controller-manager-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.448511   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.642757   29367 request.go:629] Waited for 194.191312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd
	I0505 21:19:43.642848   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd
	I0505 21:19:43.642856   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.642871   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.642881   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.646980   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:43.842881   29367 request.go:629] Waited for 195.087599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:43.842936   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:43.842945   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.842957   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.842965   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.846087   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.846720   29367 pod_ready.go:92] pod "kube-proxy-8xdzd" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.846735   29367 pod_ready.go:81] duration metric: took 398.218356ms for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.846744   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqq6b" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.042858   29367 request.go:629] Waited for 196.051735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqq6b
	I0505 21:19:44.042957   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqq6b
	I0505 21:19:44.042970   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.042980   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.042986   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.046927   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:44.243099   29367 request.go:629] Waited for 195.356238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:44.243181   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:44.243188   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.243195   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.243199   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.246854   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:44.247640   29367 pod_ready.go:92] pod "kube-proxy-nqq6b" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:44.247658   29367 pod_ready.go:81] duration metric: took 400.907383ms for pod "kube-proxy-nqq6b" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.247679   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.442804   29367 request.go:629] Waited for 195.070743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:19:44.442860   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:19:44.442865   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.442872   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.442876   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.446754   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:44.643247   29367 request.go:629] Waited for 195.334258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:44.643307   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:44.643318   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.643329   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.643336   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.647623   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:44.648755   29367 pod_ready.go:92] pod "kube-proxy-wbf7q" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:44.648774   29367 pod_ready.go:81] duration metric: took 401.089611ms for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.648784   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.842799   29367 request.go:629] Waited for 193.905816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:19:44.842868   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:19:44.842874   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.842881   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.842886   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.846964   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:45.043128   29367 request.go:629] Waited for 195.357501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:45.043183   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:45.043190   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.043201   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.043208   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.047472   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:45.048467   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:45.048485   29367 pod_ready.go:81] duration metric: took 399.695996ms for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.048496   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.242547   29367 request.go:629] Waited for 193.994855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:19:45.242599   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:19:45.242604   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.242611   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.242615   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.246122   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.443521   29367 request.go:629] Waited for 196.571897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:45.443576   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:45.443582   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.443589   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.443596   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.447402   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.448284   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:45.448304   29367 pod_ready.go:81] duration metric: took 399.802534ms for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.448314   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.643553   29367 request.go:629] Waited for 195.18216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m03
	I0505 21:19:45.643642   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m03
	I0505 21:19:45.643660   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.643675   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.643685   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.647166   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.842567   29367 request.go:629] Waited for 194.47774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:45.842660   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:45.842668   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.842680   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.842686   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.846121   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.846857   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:45.846880   29367 pod_ready.go:81] duration metric: took 398.558422ms for pod "kube-scheduler-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.846894   29367 pod_ready.go:38] duration metric: took 10.399629772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 21:19:45.846922   29367 api_server.go:52] waiting for apiserver process to appear ...
	I0505 21:19:45.846990   29367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:19:45.865989   29367 api_server.go:72] duration metric: took 17.77790312s to wait for apiserver process to appear ...
	I0505 21:19:45.866011   29367 api_server.go:88] waiting for apiserver healthz status ...
	I0505 21:19:45.866032   29367 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0505 21:19:45.872618   29367 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I0505 21:19:45.872680   29367 round_trippers.go:463] GET https://192.168.39.178:8443/version
	I0505 21:19:45.872703   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.872713   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.872721   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.873554   29367 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0505 21:19:45.873605   29367 api_server.go:141] control plane version: v1.30.0
	I0505 21:19:45.873618   29367 api_server.go:131] duration metric: took 7.601764ms to wait for apiserver health ...
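The healthz and version probes above are plain GETs against the API server, so a hand-run equivalent (same assumed ha-322980 context) is:

    kubectl --context ha-322980 get --raw /healthz    # prints "ok" when healthy
    kubectl --context ha-322980 get --raw /version    # JSON including gitVersion v1.30.0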
	I0505 21:19:45.873626   29367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 21:19:46.043033   29367 request.go:629] Waited for 169.347897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.043114   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.043129   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.043140   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.043150   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.051571   29367 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 21:19:46.059684   29367 system_pods.go:59] 24 kube-system pods found
	I0505 21:19:46.059712   29367 system_pods.go:61] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:19:46.059717   29367 system_pods.go:61] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:19:46.059721   29367 system_pods.go:61] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:19:46.059725   29367 system_pods.go:61] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:19:46.059728   29367 system_pods.go:61] "etcd-ha-322980-m03" [15754f58-e7a0-4f74-b448-d1b628a32678] Running
	I0505 21:19:46.059731   29367 system_pods.go:61] "kindnet-ks55j" [d7afae98-1d61-43b1-ac25-c085e289db4d] Running
	I0505 21:19:46.059734   29367 system_pods.go:61] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:19:46.059736   29367 system_pods.go:61] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:19:46.059741   29367 system_pods.go:61] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:19:46.059744   29367 system_pods.go:61] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:19:46.059747   29367 system_pods.go:61] "kube-apiserver-ha-322980-m03" [575db24d-e297-4995-903b-34d0c3a2a268] Running
	I0505 21:19:46.059751   29367 system_pods.go:61] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:19:46.059754   29367 system_pods.go:61] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:19:46.059757   29367 system_pods.go:61] "kube-controller-manager-ha-322980-m03" [acdc19e3-d12c-4c23-86f0-b10845b406ce] Running
	I0505 21:19:46.059760   29367 system_pods.go:61] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:19:46.059763   29367 system_pods.go:61] "kube-proxy-nqq6b" [73c9f1e1-7917-43ec-8876-e6f4280ecad3] Running
	I0505 21:19:46.059767   29367 system_pods.go:61] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:19:46.059772   29367 system_pods.go:61] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:19:46.059775   29367 system_pods.go:61] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:19:46.059778   29367 system_pods.go:61] "kube-scheduler-ha-322980-m03" [15c200c1-1945-43fa-87c7-900bb219da1d] Running
	I0505 21:19:46.059784   29367 system_pods.go:61] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:19:46.059787   29367 system_pods.go:61] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:19:46.059790   29367 system_pods.go:61] "kube-vip-ha-322980-m03" [5083810a-dbf0-4a5f-9006-02673bc8d1c7] Running
	I0505 21:19:46.059793   29367 system_pods.go:61] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:19:46.059798   29367 system_pods.go:74] duration metric: took 186.165526ms to wait for pod list to return data ...
	I0505 21:19:46.059809   29367 default_sa.go:34] waiting for default service account to be created ...
	I0505 21:19:46.242614   29367 request.go:629] Waited for 182.734312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:19:46.242670   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:19:46.242679   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.242687   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.242691   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.246676   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:46.246923   29367 default_sa.go:45] found service account: "default"
	I0505 21:19:46.246946   29367 default_sa.go:55] duration metric: took 187.130677ms for default service account to be created ...
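The service-account step is a single GET of the default namespace's ServiceAccounts; an equivalent spot check (context assumed as above) is:

    kubectl --context ha-322980 -n default get serviceaccount default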
	I0505 21:19:46.246957   29367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 21:19:46.443278   29367 request.go:629] Waited for 196.260889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.443331   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.443336   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.443343   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.443347   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.450152   29367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 21:19:46.457066   29367 system_pods.go:86] 24 kube-system pods found
	I0505 21:19:46.457093   29367 system_pods.go:89] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:19:46.457101   29367 system_pods.go:89] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:19:46.457107   29367 system_pods.go:89] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:19:46.457113   29367 system_pods.go:89] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:19:46.457119   29367 system_pods.go:89] "etcd-ha-322980-m03" [15754f58-e7a0-4f74-b448-d1b628a32678] Running
	I0505 21:19:46.457125   29367 system_pods.go:89] "kindnet-ks55j" [d7afae98-1d61-43b1-ac25-c085e289db4d] Running
	I0505 21:19:46.457131   29367 system_pods.go:89] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:19:46.457137   29367 system_pods.go:89] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:19:46.457144   29367 system_pods.go:89] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:19:46.457154   29367 system_pods.go:89] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:19:46.457160   29367 system_pods.go:89] "kube-apiserver-ha-322980-m03" [575db24d-e297-4995-903b-34d0c3a2a268] Running
	I0505 21:19:46.457168   29367 system_pods.go:89] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:19:46.457176   29367 system_pods.go:89] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:19:46.457186   29367 system_pods.go:89] "kube-controller-manager-ha-322980-m03" [acdc19e3-d12c-4c23-86f0-b10845b406ce] Running
	I0505 21:19:46.457194   29367 system_pods.go:89] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:19:46.457214   29367 system_pods.go:89] "kube-proxy-nqq6b" [73c9f1e1-7917-43ec-8876-e6f4280ecad3] Running
	I0505 21:19:46.457222   29367 system_pods.go:89] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:19:46.457232   29367 system_pods.go:89] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:19:46.457239   29367 system_pods.go:89] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:19:46.457249   29367 system_pods.go:89] "kube-scheduler-ha-322980-m03" [15c200c1-1945-43fa-87c7-900bb219da1d] Running
	I0505 21:19:46.457256   29367 system_pods.go:89] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:19:46.457265   29367 system_pods.go:89] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:19:46.457274   29367 system_pods.go:89] "kube-vip-ha-322980-m03" [5083810a-dbf0-4a5f-9006-02673bc8d1c7] Running
	I0505 21:19:46.457282   29367 system_pods.go:89] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:19:46.457292   29367 system_pods.go:126] duration metric: took 210.328387ms to wait for k8s-apps to be running ...
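To eyeball the same 24 kube-system pods and how they are spread across ha-322980, ha-322980-m02 and ha-322980-m03, a wide listing works (again a manual equivalent, not the test's own call):

    kubectl --context ha-322980 -n kube-system get pods -o wide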
	I0505 21:19:46.457305   29367 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 21:19:46.457355   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:19:46.475862   29367 system_svc.go:56] duration metric: took 18.552221ms WaitForService to wait for kubelet
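The kubelet check keys off the systemctl exit status: is-active --quiet prints nothing and returns 0 only when the unit is active. A hand-run version over minikube ssh (profile name taken from the log) would be:

    minikube -p ha-322980 ssh "sudo systemctl is-active --quiet kubelet && echo kubelet active"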
	I0505 21:19:46.475887   29367 kubeadm.go:576] duration metric: took 18.38780276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:19:46.475909   29367 node_conditions.go:102] verifying NodePressure condition ...
	I0505 21:19:46.643292   29367 request.go:629] Waited for 167.315134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes
	I0505 21:19:46.643352   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes
	I0505 21:19:46.643357   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.643364   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.643368   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.647544   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:46.648876   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:19:46.648897   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:19:46.648908   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:19:46.648912   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:19:46.648916   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:19:46.648919   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:19:46.648923   29367 node_conditions.go:105] duration metric: took 173.008596ms to run NodePressure ...
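The three capacity pairs above come from the NodeList returned by GET /api/v1/nodes; the same numbers can be pulled with a jsonpath query (context assumed as before):

    kubectl --context ha-322980 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}cpu={.status.capacity.cpu}{"\t"}ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}{end}'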
	I0505 21:19:46.648937   29367 start.go:240] waiting for startup goroutines ...
	I0505 21:19:46.648959   29367 start.go:254] writing updated cluster config ...
	I0505 21:19:46.649219   29367 ssh_runner.go:195] Run: rm -f paused
	I0505 21:19:46.698818   29367 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0505 21:19:46.700817   29367 out.go:177] * Done! kubectl is now configured to use "ha-322980" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.899310369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944228899284282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1babdc8-2842-40dd-80cb-b56253da48af name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.900034234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=783d56b3-0237-4a4a-8313-ba790b31ee5a name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.900093729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=783d56b3-0237-4a4a-8313-ba790b31ee5a name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.901133940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=783d56b3-0237-4a4a-8313-ba790b31ee5a name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.950014734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3118e031-ce93-4596-8f20-e91e362e304a name=/runtime.v1.RuntimeService/Version
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.950096213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3118e031-ce93-4596-8f20-e91e362e304a name=/runtime.v1.RuntimeService/Version
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.951298412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3dcb078b-660b-42c2-863e-c03b5ac9352a name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.952117451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944228952089942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3dcb078b-660b-42c2-863e-c03b5ac9352a name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.952828337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eea9634d-d421-4562-821c-12b19a16cb07 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.952889312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eea9634d-d421-4562-821c-12b19a16cb07 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:48 ha-322980 crio[687]: time="2024-05-05 21:23:48.953121064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eea9634d-d421-4562-821c-12b19a16cb07 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.005117555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60b3154f-3d68-4c46-a1ca-6b2a62d8afab name=/runtime.v1.RuntimeService/Version
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.005186269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60b3154f-3d68-4c46-a1ca-6b2a62d8afab name=/runtime.v1.RuntimeService/Version
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.006540916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59773d67-a629-4f3a-834a-fa0af822fa8d name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.007242581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944229007217799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59773d67-a629-4f3a-834a-fa0af822fa8d name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.008129370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cc831bd-d656-4d3b-b622-58e18d8fca7f name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.008181687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cc831bd-d656-4d3b-b622-58e18d8fca7f name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.008402497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cc831bd-d656-4d3b-b622-58e18d8fca7f name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.053265664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=490d045f-96f8-49df-99bb-4516cb1de63e name=/runtime.v1.RuntimeService/Version
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.053367689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=490d045f-96f8-49df-99bb-4516cb1de63e name=/runtime.v1.RuntimeService/Version
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.054956576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc2d2606-92cc-4dff-8796-053b9fd1d76c name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.056645850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944229056621209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc2d2606-92cc-4dff-8796-053b9fd1d76c name=/runtime.v1.ImageService/ImageFsInfo
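The CRI-O entries in this section are the server side of routine CRI polling (Version, ImageFsInfo, ListContainers). The same RPCs can be re-issued by hand from inside the node with crictl; a sketch over minikube ssh (not part of the captured test run):

    minikube -p ha-322980 ssh "sudo crictl version && sudo crictl imagefsinfo && sudo crictl ps"

crictl ps lists running containers only by default, which matches the CONTAINER_RUNNING entries in the responses above.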
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.057669684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae0f4d23-894b-4cd8-b660-e8400b23ae75 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.057734512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae0f4d23-894b-4cd8-b660-e8400b23ae75 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:23:49 ha-322980 crio[687]: time="2024-05-05 21:23:49.058061525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae0f4d23-894b-4cd8-b660-e8400b23ae75 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d9743f3da0de5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   238b5b24a572e       busybox-fc5497c4f-xt9l5
	0b360d142570d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   cd560b1055b35       coredns-7db6d8ff4d-fqt45
	e065fafa4b7aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   9f56aff0e5f86       coredns-7db6d8ff4d-78zmw
	63d1d40ce5925       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   bca34597f1572       storage-provisioner
	57151a6a532be       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   1a6fd410f5e04       kindnet-lwtnx
	4da23c6720461       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago       Running             kube-proxy                0                   8b3a42343ade0       kube-proxy-8xdzd
	abf4aae19a401       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   f40e5905346ce       kube-vip-ha-322980
	d73ef383ce1ab       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   913466e1710aa       kube-scheduler-ha-322980
	b13d21aa2e8e7       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   82abab5bb480d       kube-apiserver-ha-322980
	97769959b22d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   01d81d8dc3bcb       etcd-ha-322980
	6ebcc8c1017ed       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   b3b0a14099e30       kube-controller-manager-ha-322980
	
	
	==> coredns [0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b] <==
	[INFO] 10.244.1.2:40837 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.017605675s
	[INFO] 10.244.0.4:37323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000501931s
	[INFO] 10.244.0.4:37770 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000105156s
	[INFO] 10.244.0.4:49857 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002253399s
	[INFO] 10.244.2.2:55982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194151s
	[INFO] 10.244.1.2:51278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017965s
	[INFO] 10.244.1.2:37849 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301689s
	[INFO] 10.244.0.4:58808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118281s
	[INFO] 10.244.0.4:59347 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074943s
	[INFO] 10.244.0.4:44264 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127442s
	[INFO] 10.244.0.4:45870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001035173s
	[INFO] 10.244.0.4:45397 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126149s
	[INFO] 10.244.2.2:38985 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241724s
	[INFO] 10.244.1.2:41200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185837s
	[INFO] 10.244.0.4:53459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188027s
	[INFO] 10.244.0.4:43760 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146395s
	[INFO] 10.244.2.2:45375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112163s
	[INFO] 10.244.2.2:60638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000225418s
	[INFO] 10.244.1.2:33012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251463s
	[INFO] 10.244.0.4:48613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079688s
	[INFO] 10.244.0.4:54870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050324s
	[INFO] 10.244.0.4:36700 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167489s
	[INFO] 10.244.0.4:56859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077358s
	[INFO] 10.244.2.2:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122063s
	[INFO] 10.244.2.2:43717 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123902s
	
	
	==> coredns [e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d] <==
	[INFO] 10.244.1.2:55822 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180574s
	[INFO] 10.244.1.2:45364 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154196s
	[INFO] 10.244.1.2:58343 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000454003s
	[INFO] 10.244.0.4:35231 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621128s
	[INFO] 10.244.0.4:32984 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051146s
	[INFO] 10.244.0.4:43928 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004146s
	[INFO] 10.244.2.2:44358 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001832218s
	[INFO] 10.244.2.2:34081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017944s
	[INFO] 10.244.2.2:36047 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087749s
	[INFO] 10.244.2.2:60557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001143135s
	[INFO] 10.244.2.2:60835 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073052s
	[INFO] 10.244.2.2:42876 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093376s
	[INFO] 10.244.2.2:33057 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070619s
	[INFO] 10.244.1.2:41910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009436s
	[INFO] 10.244.1.2:43839 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082555s
	[INFO] 10.244.1.2:39008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075851s
	[INFO] 10.244.0.4:47500 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110566s
	[INFO] 10.244.0.4:44728 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071752s
	[INFO] 10.244.2.2:38205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222144s
	[INFO] 10.244.2.2:46321 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164371s
	[INFO] 10.244.1.2:41080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205837s
	[INFO] 10.244.1.2:58822 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264144s
	[INFO] 10.244.1.2:55995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174393s
	[INFO] 10.244.2.2:46471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00069286s
	[INFO] 10.244.2.2:52414 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163744s
	
	
	==> describe nodes <==
	Name:               ha-322980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T21_16_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:23:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-322980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a019ec328ab467ca04365748baaa367
	  System UUID:                3a019ec3-28ab-467c-a043-65748baaa367
	  Boot ID:                    c9018f9a-79b9-43c5-a307-9ae120187dfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xt9l5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-7db6d8ff4d-78zmw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m24s
	  kube-system                 coredns-7db6d8ff4d-fqt45             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m24s
	  kube-system                 etcd-ha-322980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m35s
	  kube-system                 kindnet-lwtnx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-apiserver-ha-322980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-ha-322980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-8xdzd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-ha-322980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-vip-ha-322980                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m22s  kube-proxy       
	  Normal  Starting                 7m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m35s  kubelet          Node ha-322980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s  kubelet          Node ha-322980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s  kubelet          Node ha-322980 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m25s  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal  NodeReady                7m22s  kubelet          Node ha-322980 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal  RegisteredNode           4m7s   node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	
	
	Name:               ha-322980-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:21:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-322980-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5d1651406694de39b61eff245fccb61
	  System UUID:                c5d16514-0669-4de3-9b61-eff245fccb61
	  Boot ID:                    f0d34a2f-c3e3-4515-ab49-7c79a5c98854
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tbmdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-322980-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m35s
	  kube-system                 kindnet-lmgkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m37s
	  kube-system                 kube-apiserver-ha-322980-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-ha-322980-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-wbf7q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-ha-322980-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-322980-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m35s                  node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-322980-m02 status is now: NodeNotReady
	
	
	Name:               ha-322980-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_19_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:19:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:23:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ha-322980-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1273ee04f2de426dbabc52e46998b0eb
	  System UUID:                1273ee04-f2de-426d-babc-52e46998b0eb
	  Boot ID:                    35fdaf53-db70-4446-a9c3-71a0744d3bea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xz268                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-322980-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-ks55j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-322980-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-322980-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-nqq6b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-322980-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-vip-ha-322980-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-322980-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-322980-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-322980-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           4m24s                  node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	
	
	Name:               ha-322980-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_20_29_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:20:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:23:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:20:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:20:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:20:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:21:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-322980-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c8db3356b24ba197e491501ddbfd49
	  System UUID:                a4c8db33-56b2-4ba1-97e4-91501ddbfd49
	  Boot ID:                    9ee2f344-9fdd-4182-a447-83dc5b12dc4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nnc4q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m21s
	  kube-system                 kube-proxy-68cxr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m21s (x3 over 3m22s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x3 over 3m22s)  kubelet          Node ha-322980-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x3 over 3m22s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m20s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-322980-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May 5 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051886] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042048] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.638371] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.482228] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.738174] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.501831] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.064246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066779] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.227983] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.115503] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.299594] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +5.048468] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.072016] kauditd_printk_skb: 130 callbacks suppressed
	[May 5 21:16] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.935027] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.150561] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.089537] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.653864] kauditd_printk_skb: 21 callbacks suppressed
	[May 5 21:18] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923] <==
	{"level":"warn","ts":"2024-05-05T21:23:49.25593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.355971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.370137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.379453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.383971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.399809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.409005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.417682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.424681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.42902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.438405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.447252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.454197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.455116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.45833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.462516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.470476Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.477505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.48455Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.491238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.495082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.501837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.507719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.516104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:23:49.555928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:23:49 up 8 min,  0 users,  load average: 0.52, 0.39, 0.20
	Linux ha-322980 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6] <==
	I0505 21:23:18.383291       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:28.391510       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:23:28.391563       1 main.go:227] handling current node
	I0505 21:23:28.391584       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:23:28.391593       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:28.391727       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:23:28.391736       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:28.391894       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:23:28.391935       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:38.398021       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:23:38.398063       1 main.go:227] handling current node
	I0505 21:23:38.398075       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:23:38.398080       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:38.398184       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:23:38.398189       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:38.398237       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:23:38.398269       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:48.406137       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:23:48.406187       1 main.go:227] handling current node
	I0505 21:23:48.406199       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:23:48.406205       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:48.406322       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:23:48.406327       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:48.406373       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:23:48.406378       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f] <==
	I0505 21:16:14.456940       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0505 21:16:14.471073       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0505 21:16:25.091428       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0505 21:16:25.243573       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0505 21:19:24.348476       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0505 21:19:24.348936       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0505 21:19:24.349065       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.874µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0505 21:19:24.350385       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0505 21:19:24.350566       1 timeout.go:142] post-timeout activity - time-elapsed: 2.907309ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0505 21:19:53.980101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45334: use of closed network connection
	E0505 21:19:54.191103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45360: use of closed network connection
	E0505 21:19:54.425717       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45380: use of closed network connection
	E0505 21:19:54.674501       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45404: use of closed network connection
	E0505 21:19:54.892979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60986: use of closed network connection
	E0505 21:19:55.095220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32768: use of closed network connection
	E0505 21:19:55.318018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32788: use of closed network connection
	E0505 21:19:55.521006       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32802: use of closed network connection
	E0505 21:19:55.726708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32822: use of closed network connection
	E0505 21:19:56.050278       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32854: use of closed network connection
	E0505 21:19:56.261228       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32870: use of closed network connection
	E0505 21:19:56.472141       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32894: use of closed network connection
	E0505 21:19:56.685025       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32920: use of closed network connection
	E0505 21:19:56.916745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32946: use of closed network connection
	E0505 21:19:57.119236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32978: use of closed network connection
	W0505 21:21:40.559127       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.29]
	
	
	==> kube-controller-manager [6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d] <==
	I0505 21:19:23.570835       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-322980-m03" podCIDRs=["10.244.2.0/24"]
	I0505 21:19:24.568700       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-322980-m03"
	I0505 21:19:47.662623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.19352ms"
	I0505 21:19:47.747354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.336305ms"
	I0505 21:19:47.972892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="225.464365ms"
	I0505 21:19:48.039165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.156608ms"
	I0505 21:19:48.061198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.948445ms"
	I0505 21:19:48.062605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.6µs"
	I0505 21:19:48.123483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.360242ms"
	I0505 21:19:48.124314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.382µs"
	I0505 21:19:52.552250       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.118µs"
	I0505 21:19:52.698304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.181575ms"
	I0505 21:19:52.700709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="612.499µs"
	I0505 21:19:53.068287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.874433ms"
	I0505 21:19:53.068456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.906µs"
	I0505 21:19:53.481560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.461316ms"
	I0505 21:19:53.481966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="322.614µs"
	E0505 21:20:27.922891       1 certificate_controller.go:146] Sync csr-2rjtw failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2rjtw": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:20:28.227431       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-322980-m04\" does not exist"
	I0505 21:20:28.244528       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-322980-m04" podCIDRs=["10.244.3.0/24"]
	I0505 21:20:29.581280       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-322980-m04"
	I0505 21:21:06.985099       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-322980-m04"
	I0505 21:22:07.478633       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-322980-m04"
	I0505 21:22:07.677652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.585128ms"
	I0505 21:22:07.678061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.262µs"
	
	
	==> kube-proxy [4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c] <==
	I0505 21:16:26.420980       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:16:26.431622       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.178"]
	I0505 21:16:26.625948       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:16:26.626022       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:16:26.626042       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:16:26.637113       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:16:26.637368       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:16:26.637407       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:16:26.638392       1 config.go:192] "Starting service config controller"
	I0505 21:16:26.638441       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:16:26.638467       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:16:26.638471       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:16:26.639227       1 config.go:319] "Starting node config controller"
	I0505 21:16:26.639264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:16:26.739349       1 shared_informer.go:320] Caches are synced for node config
	I0505 21:16:26.739451       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:16:26.739461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b] <==
	I0505 21:16:13.305853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0505 21:19:47.876299       1 schedule_one.go:1069] "Error occurred" err="Pod default/busybox-fc5497c4f-p5jrm is already present in the active queue" pod="default/busybox-fc5497c4f-p5jrm"
	E0505 21:19:47.902396       1 schedule_one.go:1069] "Error occurred" err="Pod default/busybox-fc5497c4f-jsc6v is already present in the active queue" pod="default/busybox-fc5497c4f-jsc6v"
	E0505 21:20:28.319936       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-px9md\": pod kindnet-px9md is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-px9md" node="ha-322980-m04"
	E0505 21:20:28.321222       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod afeb8dbc-418f-484d-99aa-56a1a174965a(kube-system/kindnet-px9md) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-px9md"
	E0505 21:20:28.321301       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-px9md\": pod kindnet-px9md is already assigned to node \"ha-322980-m04\"" pod="kube-system/kindnet-px9md"
	I0505 21:20:28.321335       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-px9md" node="ha-322980-m04"
	E0505 21:20:28.320996       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w4c7b\": pod kube-proxy-w4c7b is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w4c7b" node="ha-322980-m04"
	E0505 21:20:28.326375       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 059c2bdf-8ad0-4281-b165-011150d463a6(kube-system/kube-proxy-w4c7b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w4c7b"
	E0505 21:20:28.326402       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w4c7b\": pod kube-proxy-w4c7b is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-w4c7b"
	I0505 21:20:28.326454       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w4c7b" node="ha-322980-m04"
	E0505 21:20:28.366965       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nnc4q\": pod kindnet-nnc4q is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nnc4q" node="ha-322980-m04"
	E0505 21:20:28.367080       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nnc4q\": pod kindnet-nnc4q is already assigned to node \"ha-322980-m04\"" pod="kube-system/kindnet-nnc4q"
	E0505 21:20:28.369383       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vmwcl\": pod kube-proxy-vmwcl is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vmwcl" node="ha-322980-m04"
	E0505 21:20:28.369838       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5169a0e2-c91d-413a-bbaa-87d14f7deb52(kube-system/kube-proxy-vmwcl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vmwcl"
	E0505 21:20:28.370049       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vmwcl\": pod kube-proxy-vmwcl is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-vmwcl"
	I0505 21:20:28.370412       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vmwcl" node="ha-322980-m04"
	E0505 21:20:28.480473       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vnzzb\": pod kube-proxy-vnzzb is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vnzzb" node="ha-322980-m04"
	E0505 21:20:28.482734       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 600688b7-2a22-48e5-88f0-1dc70996876b(kube-system/kube-proxy-vnzzb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vnzzb"
	E0505 21:20:28.482947       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vnzzb\": pod kube-proxy-vnzzb is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-vnzzb"
	I0505 21:20:28.482999       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vnzzb" node="ha-322980-m04"
	E0505 21:20:30.687013       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tk4f6\": pod kube-proxy-tk4f6 is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tk4f6" node="ha-322980-m04"
	E0505 21:20:30.687120       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a81265bf-8396-46b9-b0f8-c8e1bf8271ee(kube-system/kube-proxy-tk4f6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tk4f6"
	E0505 21:20:30.687148       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tk4f6\": pod kube-proxy-tk4f6 is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-tk4f6"
	I0505 21:20:30.687172       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tk4f6" node="ha-322980-m04"
	
	
	==> kubelet <==
	May 05 21:19:47 ha-322980 kubelet[1385]: I0505 21:19:47.789880    1385 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqcw\" (UniqueName: \"kubernetes.io/projected/bbde9685-4494-40b7-bd53-9452fd970f5a-kube-api-access-vgqcw\") pod \"busybox-fc5497c4f-xt9l5\" (UID: \"bbde9685-4494-40b7-bd53-9452fd970f5a\") " pod="default/busybox-fc5497c4f-xt9l5"
	May 05 21:19:48 ha-322980 kubelet[1385]: E0505 21:19:48.929889    1385 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 05 21:19:48 ha-322980 kubelet[1385]: E0505 21:19:48.930046    1385 projected.go:200] Error preparing data for projected volume kube-api-access-vgqcw for pod default/busybox-fc5497c4f-xt9l5: failed to sync configmap cache: timed out waiting for the condition
	May 05 21:19:48 ha-322980 kubelet[1385]: E0505 21:19:48.930323    1385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bbde9685-4494-40b7-bd53-9452fd970f5a-kube-api-access-vgqcw podName:bbde9685-4494-40b7-bd53-9452fd970f5a nodeName:}" failed. No retries permitted until 2024-05-05 21:19:49.430176697 +0000 UTC m=+215.201890772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vgqcw" (UniqueName: "kubernetes.io/projected/bbde9685-4494-40b7-bd53-9452fd970f5a-kube-api-access-vgqcw") pod "busybox-fc5497c4f-xt9l5" (UID: "bbde9685-4494-40b7-bd53-9452fd970f5a") : failed to sync configmap cache: timed out waiting for the condition
	May 05 21:19:53 ha-322980 kubelet[1385]: I0505 21:19:53.430690    1385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-xt9l5" podStartSLOduration=3.882587934 podStartE2EDuration="6.430639942s" podCreationTimestamp="2024-05-05 21:19:47 +0000 UTC" firstStartedPulling="2024-05-05 21:19:49.986539851 +0000 UTC m=+215.758253927" lastFinishedPulling="2024-05-05 21:19:52.534591852 +0000 UTC m=+218.306305935" observedRunningTime="2024-05-05 21:19:53.430559611 +0000 UTC m=+219.202273705" watchObservedRunningTime="2024-05-05 21:19:53.430639942 +0000 UTC m=+219.202354038"
	May 05 21:20:14 ha-322980 kubelet[1385]: E0505 21:20:14.419155    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:20:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:20:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:20:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:20:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:21:14 ha-322980 kubelet[1385]: E0505 21:21:14.406480    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:21:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:21:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:21:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:21:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:22:14 ha-322980 kubelet[1385]: E0505 21:22:14.409202    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:22:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:22:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:22:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:22:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:23:14 ha-322980 kubelet[1385]: E0505 21:23:14.412106    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:23:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:23:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:23:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:23:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-322980 -n ha-322980
helpers_test.go:261: (dbg) Run:  kubectl --context ha-322980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.32s)
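Several of the kube-controller-manager and kube-scheduler errors in the post-mortem above ("Operation cannot be fulfilled ... the object has been modified; please apply your changes to the latest version and try again", "pod ... is already assigned to node") are ordinary optimistic-concurrency conflicts that callers are expected to retry rather than treat as failures. A minimal stdlib-only Go sketch of that retry-on-conflict pattern is below; the names are illustrative only, and real Kubernetes clients would typically use k8s.io/client-go/util/retry.RetryOnConflict instead.

package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for a Kubernetes 409 Conflict ("object has been modified").
var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// updateWithRetry re-runs a mutation until it no longer conflicts, which is the
// pattern the controller logs above assume callers follow.
func updateWithRetry(attempts int, update func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = update(); !errors.Is(err, errConflict) {
			return err // success, or a non-retryable error
		}
	}
	return err
}

func main() {
	tries := 0
	err := updateWithRetry(5, func() error {
		tries++
		if tries < 3 {
			return errConflict // simulate two stale writes before a clean one
		}
		return nil
	})
	fmt.Println("attempts:", tries, "err:", err)
}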

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (55.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 3 (3.193635044s)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:23:54.243960   34916 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:23:54.244112   34916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:23:54.244123   34916 out.go:304] Setting ErrFile to fd 2...
	I0505 21:23:54.244132   34916 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:23:54.244319   34916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:23:54.244515   34916 out.go:298] Setting JSON to false
	I0505 21:23:54.244546   34916 mustload.go:65] Loading cluster: ha-322980
	I0505 21:23:54.244664   34916 notify.go:220] Checking for updates...
	I0505 21:23:54.244966   34916 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:23:54.244983   34916 status.go:255] checking status of ha-322980 ...
	I0505 21:23:54.245398   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:54.245487   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:54.264065   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0505 21:23:54.264582   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:54.265148   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:54.265176   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:54.265614   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:54.265903   34916 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:23:54.267729   34916 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:23:54.267757   34916 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:23:54.268080   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:54.268139   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:54.284104   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0505 21:23:54.284496   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:54.284939   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:54.284959   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:54.285298   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:54.285494   34916 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:23:54.288513   34916 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:54.289005   34916 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:23:54.289032   34916 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:54.289218   34916 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:23:54.289494   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:54.289538   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:54.304736   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0505 21:23:54.305179   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:54.305640   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:54.305663   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:54.305941   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:54.306161   34916 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:23:54.306350   34916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:54.306382   34916 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:23:54.308866   34916 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:54.309242   34916 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:23:54.309264   34916 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:54.309493   34916 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:23:54.309649   34916 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:23:54.309780   34916 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:23:54.309932   34916 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:23:54.399653   34916 ssh_runner.go:195] Run: systemctl --version
	I0505 21:23:54.406641   34916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:23:54.424155   34916 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:23:54.424197   34916 api_server.go:166] Checking apiserver status ...
	I0505 21:23:54.424233   34916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:23:54.440370   34916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:23:54.451444   34916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:23:54.451539   34916 ssh_runner.go:195] Run: ls
	I0505 21:23:54.457653   34916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:23:54.462287   34916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:23:54.462309   34916 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:23:54.462319   34916 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:23:54.462336   34916 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:23:54.462643   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:54.462671   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:54.478449   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I0505 21:23:54.478905   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:54.479460   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:54.479493   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:54.479833   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:54.480037   34916 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:23:54.481787   34916 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:23:54.481803   34916 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:23:54.482199   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:54.482243   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:54.497268   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0505 21:23:54.497731   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:54.498321   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:54.498347   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:54.498727   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:54.498917   34916 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:23:54.501794   34916 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:54.502193   34916 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:23:54.502221   34916 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:54.502345   34916 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:23:54.502655   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:54.502682   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:54.519051   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0505 21:23:54.519549   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:54.519995   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:54.520023   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:54.520415   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:54.520578   34916 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:23:54.520747   34916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:54.520765   34916 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:23:54.523610   34916 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:54.524019   34916 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:23:54.524046   34916 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:54.524215   34916 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:23:54.524393   34916 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:23:54.524513   34916 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:23:54.524631   34916 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:23:57.019740   34916 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:23:57.019833   34916 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:23:57.019852   34916 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:23:57.019859   34916 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:23:57.019876   34916 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:23:57.019883   34916 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:23:57.020227   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:57.020285   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:57.035283   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I0505 21:23:57.035762   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:57.036215   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:57.036242   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:57.036545   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:57.036746   34916 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:23:57.038398   34916 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:23:57.038417   34916 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:23:57.038844   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:57.038873   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:57.052806   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0505 21:23:57.053195   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:57.053763   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:57.053780   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:57.054156   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:57.054387   34916 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:23:57.057273   34916 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:57.057726   34916 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:23:57.057756   34916 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:57.057812   34916 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:23:57.058131   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:57.058163   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:57.072719   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
	I0505 21:23:57.073177   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:57.073711   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:57.073730   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:57.074027   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:57.074201   34916 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:23:57.074357   34916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:57.074378   34916 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:23:57.076768   34916 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:57.077129   34916 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:23:57.077150   34916 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:23:57.077294   34916 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:23:57.077409   34916 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:23:57.077500   34916 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:23:57.077579   34916 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:23:57.160878   34916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:23:57.177461   34916 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:23:57.177490   34916 api_server.go:166] Checking apiserver status ...
	I0505 21:23:57.177528   34916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:23:57.198820   34916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:23:57.210535   34916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:23:57.210597   34916 ssh_runner.go:195] Run: ls
	I0505 21:23:57.216269   34916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:23:57.220894   34916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:23:57.220919   34916 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:23:57.220930   34916 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:23:57.220949   34916 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:23:57.221239   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:57.221269   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:57.235830   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
	I0505 21:23:57.236181   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:57.236611   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:57.236637   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:57.236913   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:57.237111   34916 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:23:57.238418   34916 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:23:57.238445   34916 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:23:57.238850   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:57.238900   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:57.253090   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0505 21:23:57.253463   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:57.253834   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:57.253859   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:57.254172   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:57.254375   34916 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:23:57.257063   34916 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:57.257499   34916 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:23:57.257543   34916 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:57.257732   34916 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:23:57.258054   34916 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:57.258087   34916 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:57.272150   34916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0505 21:23:57.272676   34916 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:57.273200   34916 main.go:141] libmachine: Using API Version  1
	I0505 21:23:57.273225   34916 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:57.273562   34916 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:57.273868   34916 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:23:57.274076   34916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:57.274099   34916 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:23:57.277177   34916 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:57.277573   34916 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:23:57.277604   34916 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:23:57.277754   34916 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:23:57.277917   34916 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:23:57.278096   34916 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:23:57.278238   34916 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:23:57.363985   34916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:23:57.381331   34916 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
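The stderr above shows where the "host: Error / kubelet: Nonexistent" rows for ha-322980-m02 come from: the status probe tries to open an SSH session to the node and, when the dial fails with "no route to host", degrades the reported status instead of aborting. The following is a minimal, self-contained Go sketch of that probe-and-degrade pattern; the type and function names are illustrative and are not minikube's actual API, and the check here is only a TCP dial to port 22 rather than a full SSH session.

package main

import (
	"fmt"
	"net"
	"time"
)

// nodeStatus mirrors the fields printed in the status output above.
type nodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

// probeNode is an illustrative stand-in for the reachability check: it dials the
// node's SSH port and maps a dial failure to the degraded status seen in the log.
func probeNode(name, ip string) nodeStatus {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 3*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.39.228:22: connect: no route to host"
		return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	// A real check would go on to run "systemctl is-active kubelet" and hit /healthz.
	return nodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	for _, n := range []struct{ name, ip string }{
		{"ha-322980", "192.168.39.178"},
		{"ha-322980-m02", "192.168.39.228"},
	} {
		s := probeNode(n.name, n.ip)
		fmt.Printf("%s\thost: %s\tkubelet: %s\tapiserver: %s\n", s.Name, s.Host, s.Kubelet, s.APIServer)
	}
}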
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 3 (4.852179589s)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:23:58.895993   35016 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:23:58.896229   35016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:23:58.896239   35016 out.go:304] Setting ErrFile to fd 2...
	I0505 21:23:58.896245   35016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:23:58.896452   35016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:23:58.896617   35016 out.go:298] Setting JSON to false
	I0505 21:23:58.896646   35016 mustload.go:65] Loading cluster: ha-322980
	I0505 21:23:58.896741   35016 notify.go:220] Checking for updates...
	I0505 21:23:58.897094   35016 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:23:58.897115   35016 status.go:255] checking status of ha-322980 ...
	I0505 21:23:58.897523   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:58.897597   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:58.912989   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0505 21:23:58.913334   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:58.913889   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:23:58.913914   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:58.914339   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:58.914578   35016 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:23:58.916307   35016 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:23:58.916320   35016 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:23:58.916593   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:58.916637   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:58.930887   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0505 21:23:58.931254   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:58.931716   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:23:58.931740   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:58.932021   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:58.932225   35016 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:23:58.935030   35016 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:58.935461   35016 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:23:58.935501   35016 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:58.935656   35016 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:23:58.935931   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:58.935967   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:58.950530   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0505 21:23:58.950951   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:58.951395   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:23:58.951422   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:58.951845   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:58.952042   35016 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:23:58.952243   35016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:58.952264   35016 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:23:58.954881   35016 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:58.955331   35016 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:23:58.955357   35016 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:23:58.955467   35016 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:23:58.955660   35016 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:23:58.955789   35016 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:23:58.955923   35016 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:23:59.041133   35016 ssh_runner.go:195] Run: systemctl --version
	I0505 21:23:59.048904   35016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:23:59.070948   35016 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:23:59.070977   35016 api_server.go:166] Checking apiserver status ...
	I0505 21:23:59.071008   35016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:23:59.085927   35016 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:23:59.098430   35016 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:23:59.098475   35016 ssh_runner.go:195] Run: ls
	I0505 21:23:59.103723   35016 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:23:59.108216   35016 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:23:59.108238   35016 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:23:59.108250   35016 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:23:59.108292   35016 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:23:59.108601   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:59.108632   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:59.123566   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I0505 21:23:59.123972   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:59.124434   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:23:59.124453   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:59.124721   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:59.124909   35016 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:23:59.126435   35016 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:23:59.126449   35016 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:23:59.126806   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:59.126838   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:59.141001   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0505 21:23:59.141455   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:59.141913   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:23:59.141933   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:59.142212   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:59.142374   35016 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:23:59.145264   35016 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:59.145694   35016 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:23:59.145736   35016 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:59.145915   35016 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:23:59.146253   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:23:59.146310   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:23:59.162009   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0505 21:23:59.162422   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:23:59.162857   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:23:59.162877   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:23:59.163147   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:23:59.163339   35016 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:23:59.163544   35016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:23:59.163569   35016 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:23:59.165998   35016 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:59.166375   35016 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:23:59.166403   35016 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:23:59.166551   35016 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:23:59.166707   35016 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:23:59.166850   35016 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:23:59.166957   35016 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:24:00.091723   35016 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:00.091771   35016 retry.go:31] will retry after 161.727546ms: dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:03.323721   35016 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:03.323810   35016 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:24:03.323836   35016 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:03.323845   35016 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:24:03.323895   35016 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:03.323904   35016 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:24:03.324206   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:03.324247   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:03.339089   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0505 21:24:03.339520   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:03.340023   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:24:03.340047   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:03.340348   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:03.340526   35016 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:03.342208   35016 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:24:03.342237   35016 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:03.342573   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:03.342609   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:03.358227   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0505 21:24:03.358641   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:03.359106   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:24:03.359132   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:03.359426   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:03.359618   35016 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:24:03.362385   35016 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:03.362801   35016 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:03.362829   35016 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:03.362962   35016 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:03.363309   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:03.363350   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:03.377717   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0505 21:24:03.378082   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:03.378485   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:24:03.378507   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:03.378862   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:03.379076   35016 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:03.379288   35016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:03.379313   35016 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:03.382080   35016 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:03.382496   35016 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:03.382516   35016 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:03.382710   35016 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:03.382871   35016 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:03.383035   35016 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:03.383164   35016 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:03.467648   35016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:03.485325   35016 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:03.485354   35016 api_server.go:166] Checking apiserver status ...
	I0505 21:24:03.485394   35016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:03.502789   35016 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:24:03.517255   35016 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:03.517303   35016 ssh_runner.go:195] Run: ls
	I0505 21:24:03.522832   35016 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:03.527231   35016 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:03.527250   35016 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:24:03.527258   35016 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:03.527276   35016 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:24:03.527586   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:03.527623   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:03.542225   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I0505 21:24:03.542684   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:03.543208   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:24:03.543230   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:03.543532   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:03.543727   35016 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:03.545513   35016 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:24:03.545529   35016 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:03.545832   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:03.545871   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:03.559948   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39473
	I0505 21:24:03.560339   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:03.560789   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:24:03.560808   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:03.561099   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:03.561267   35016 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:24:03.563926   35016 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:03.564294   35016 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:03.564318   35016 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:03.564445   35016 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:03.564714   35016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:03.564768   35016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:03.579192   35016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0505 21:24:03.579660   35016 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:03.580160   35016 main.go:141] libmachine: Using API Version  1
	I0505 21:24:03.580183   35016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:03.580478   35016 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:03.580644   35016 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:03.580847   35016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:03.580870   35016 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:03.583375   35016 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:03.583998   35016 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:03.584018   35016 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:03.584180   35016 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:03.584378   35016 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:03.584591   35016 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:03.584747   35016 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:03.671904   35016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:03.692268   35016 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 3 (5.084570734s)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:24:04.797836   35116 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:24:04.797965   35116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:04.797975   35116 out.go:304] Setting ErrFile to fd 2...
	I0505 21:24:04.797981   35116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:04.798187   35116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:24:04.798360   35116 out.go:298] Setting JSON to false
	I0505 21:24:04.798388   35116 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:04.798444   35116 notify.go:220] Checking for updates...
	I0505 21:24:04.798904   35116 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:04.798924   35116 status.go:255] checking status of ha-322980 ...
	I0505 21:24:04.799406   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:04.799461   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:04.814584   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0505 21:24:04.815053   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:04.815639   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:04.815656   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:04.815992   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:04.816212   35116 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:24:04.817557   35116 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:24:04.817572   35116 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:04.817828   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:04.817864   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:04.832681   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40471
	I0505 21:24:04.833053   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:04.833550   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:04.833576   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:04.833926   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:04.834117   35116 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:24:04.836965   35116 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:04.837443   35116 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:04.837466   35116 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:04.837633   35116 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:04.837912   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:04.837959   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:04.852084   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0505 21:24:04.852456   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:04.852973   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:04.852995   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:04.853298   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:04.853508   35116 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:24:04.853689   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:04.853722   35116 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:24:04.856312   35116 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:04.856797   35116 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:04.856849   35116 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:04.856959   35116 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:24:04.857124   35116 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:24:04.857253   35116 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:24:04.857370   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:24:04.944507   35116 ssh_runner.go:195] Run: systemctl --version
	I0505 21:24:04.950922   35116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:04.967444   35116 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:04.967493   35116 api_server.go:166] Checking apiserver status ...
	I0505 21:24:04.967537   35116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:04.982888   35116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:24:04.993156   35116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:04.993197   35116 ssh_runner.go:195] Run: ls
	I0505 21:24:04.998039   35116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:05.002123   35116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:05.002147   35116 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:24:05.002157   35116 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:05.002171   35116 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:24:05.002434   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:05.002459   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:05.017181   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39903
	I0505 21:24:05.017583   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:05.018067   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:05.018095   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:05.018381   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:05.018560   35116 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:24:05.020145   35116 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:24:05.020165   35116 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:05.020474   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:05.020529   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:05.034344   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I0505 21:24:05.034693   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:05.035097   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:05.035116   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:05.035421   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:05.035610   35116 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:24:05.038313   35116 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:05.038746   35116 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:05.038774   35116 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:05.038926   35116 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:05.039208   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:05.039247   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:05.053471   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I0505 21:24:05.053821   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:05.054274   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:05.054293   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:05.054576   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:05.054750   35116 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:24:05.054923   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:05.054946   35116 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:24:05.057633   35116 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:05.058042   35116 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:05.058078   35116 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:05.058243   35116 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:24:05.058429   35116 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:24:05.058545   35116 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:24:05.058669   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:24:06.395799   35116 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:06.395850   35116 retry.go:31] will retry after 282.392536ms: dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:09.467869   35116 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:09.467967   35116 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:24:09.467986   35116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:09.467993   35116 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:24:09.468014   35116 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:09.468028   35116 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:24:09.468331   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:09.468396   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:09.483267   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35445
	I0505 21:24:09.483759   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:09.484190   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:09.484208   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:09.484477   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:09.484649   35116 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:09.486182   35116 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:24:09.486196   35116 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:09.486474   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:09.486500   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:09.502139   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0505 21:24:09.502539   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:09.502980   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:09.503007   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:09.503320   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:09.503581   35116 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:24:09.506738   35116 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:09.507329   35116 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:09.507366   35116 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:09.507549   35116 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:09.507944   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:09.507990   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:09.522131   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
	I0505 21:24:09.522508   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:09.522949   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:09.522968   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:09.523271   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:09.523419   35116 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:09.523582   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:09.523601   35116 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:09.526205   35116 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:09.526642   35116 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:09.526679   35116 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:09.526824   35116 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:09.527058   35116 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:09.527263   35116 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:09.527449   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:09.613073   35116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:09.629717   35116 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:09.629752   35116 api_server.go:166] Checking apiserver status ...
	I0505 21:24:09.629793   35116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:09.645289   35116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:24:09.655560   35116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:09.655604   35116 ssh_runner.go:195] Run: ls
	I0505 21:24:09.660768   35116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:09.665477   35116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:09.665507   35116 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:24:09.665517   35116 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:09.665531   35116 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:24:09.665819   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:09.665843   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:09.681564   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0505 21:24:09.681913   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:09.682408   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:09.682431   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:09.682740   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:09.682925   35116 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:09.684226   35116 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:24:09.684240   35116 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:09.684502   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:09.684527   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:09.699159   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0505 21:24:09.699541   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:09.700008   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:09.700029   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:09.700324   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:09.700499   35116 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:24:09.703268   35116 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:09.703680   35116 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:09.703709   35116 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:09.703856   35116 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:09.704122   35116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:09.704154   35116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:09.718686   35116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34589
	I0505 21:24:09.719034   35116 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:09.719461   35116 main.go:141] libmachine: Using API Version  1
	I0505 21:24:09.719493   35116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:09.719763   35116 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:09.719952   35116 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:09.720149   35116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:09.720167   35116 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:09.722854   35116 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:09.723267   35116 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:09.723293   35116 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:09.723423   35116 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:09.723602   35116 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:09.723748   35116 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:09.723869   35116 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:09.808346   35116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:09.826223   35116 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 3 (4.403986883s)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:24:11.873535   35232 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:24:11.873696   35232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:11.873706   35232 out.go:304] Setting ErrFile to fd 2...
	I0505 21:24:11.873712   35232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:11.873932   35232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:24:11.874105   35232 out.go:298] Setting JSON to false
	I0505 21:24:11.874136   35232 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:11.874235   35232 notify.go:220] Checking for updates...
	I0505 21:24:11.874565   35232 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:11.874583   35232 status.go:255] checking status of ha-322980 ...
	I0505 21:24:11.874994   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:11.875072   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:11.893131   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0505 21:24:11.893574   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:11.894244   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:11.894276   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:11.894651   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:11.894911   35232 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:24:11.896395   35232 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:24:11.896418   35232 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:11.896685   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:11.896718   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:11.911660   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0505 21:24:11.912059   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:11.912547   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:11.912572   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:11.912901   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:11.913260   35232 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:24:11.916239   35232 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:11.916737   35232 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:11.916764   35232 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:11.916914   35232 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:11.917315   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:11.917352   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:11.934163   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0505 21:24:11.934611   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:11.935047   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:11.935069   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:11.935410   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:11.935561   35232 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:24:11.935714   35232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:11.935736   35232 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:24:11.938068   35232 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:11.938380   35232 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:11.938405   35232 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:11.938571   35232 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:24:11.938757   35232 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:24:11.938914   35232 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:24:11.939054   35232 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:24:12.028485   35232 ssh_runner.go:195] Run: systemctl --version
	I0505 21:24:12.035227   35232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:12.054551   35232 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:12.054585   35232 api_server.go:166] Checking apiserver status ...
	I0505 21:24:12.054624   35232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:12.070816   35232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:24:12.084998   35232 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:12.085066   35232 ssh_runner.go:195] Run: ls
	I0505 21:24:12.090396   35232 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:12.094884   35232 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:12.094904   35232 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:24:12.094914   35232 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:12.094928   35232 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:24:12.095191   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:12.095227   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:12.112904   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I0505 21:24:12.113356   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:12.113858   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:12.113880   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:12.114174   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:12.114345   35232 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:24:12.115785   35232 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:24:12.115799   35232 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:12.116152   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:12.116188   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:12.131230   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0505 21:24:12.131634   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:12.132051   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:12.132068   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:12.132329   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:12.132516   35232 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:24:12.135274   35232 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:12.135804   35232 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:12.135834   35232 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:12.135994   35232 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:12.136279   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:12.136320   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:12.152217   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0505 21:24:12.152719   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:12.153253   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:12.153278   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:12.153608   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:12.153763   35232 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:24:12.153974   35232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:12.153995   35232 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:24:12.156939   35232 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:12.157381   35232 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:12.157420   35232 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:12.157483   35232 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:24:12.157649   35232 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:24:12.157831   35232 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:24:12.157977   35232 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:24:12.539689   35232 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:12.539756   35232 retry.go:31] will retry after 240.519933ms: dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:15.835818   35232 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:15.835913   35232 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:24:15.835935   35232 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:15.835946   35232 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:24:15.835997   35232 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:15.836009   35232 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:24:15.836420   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:15.836465   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:15.851454   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
	I0505 21:24:15.851948   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:15.852439   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:15.852455   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:15.852785   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:15.852996   35232 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:15.854516   35232 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:24:15.854535   35232 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:15.854819   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:15.854851   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:15.869501   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0505 21:24:15.869950   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:15.870434   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:15.870456   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:15.870859   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:15.871059   35232 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:24:15.874521   35232 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:15.875007   35232 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:15.875032   35232 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:15.875238   35232 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:15.875617   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:15.875660   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:15.890147   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0505 21:24:15.890540   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:15.890958   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:15.890979   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:15.891288   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:15.891523   35232 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:15.891767   35232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:15.891786   35232 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:15.894517   35232 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:15.894910   35232 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:15.894944   35232 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:15.895142   35232 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:15.895326   35232 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:15.895459   35232 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:15.895640   35232 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:15.984858   35232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:16.003646   35232 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:16.003675   35232 api_server.go:166] Checking apiserver status ...
	I0505 21:24:16.003720   35232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:16.022050   35232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:24:16.034683   35232 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:16.034737   35232 ssh_runner.go:195] Run: ls
	I0505 21:24:16.040645   35232 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:16.045152   35232 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:16.045175   35232 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:24:16.045185   35232 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:16.045207   35232 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:24:16.045602   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:16.045648   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:16.063077   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46741
	I0505 21:24:16.063513   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:16.064088   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:16.064113   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:16.064463   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:16.064965   35232 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:16.066953   35232 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:24:16.066971   35232 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:16.067304   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:16.067360   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:16.082488   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34401
	I0505 21:24:16.082873   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:16.083307   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:16.083328   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:16.083668   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:16.083861   35232 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:24:16.086737   35232 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:16.087192   35232 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:16.087227   35232 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:16.087365   35232 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:16.087718   35232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:16.087756   35232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:16.102514   35232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0505 21:24:16.102905   35232 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:16.103419   35232 main.go:141] libmachine: Using API Version  1
	I0505 21:24:16.103441   35232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:16.103761   35232 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:16.103966   35232 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:16.104163   35232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:16.104187   35232 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:16.107210   35232 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:16.107592   35232 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:16.107621   35232 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:16.107839   35232 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:16.108023   35232 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:16.108182   35232 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:16.108305   35232 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:16.198893   35232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:16.218472   35232 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
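The stderr block above shows the probes a status run makes against each node over SSH: df -h /var for disk capacity, systemctl is-active --quiet service kubelet, and, on control-plane nodes, pgrep kube-apiserver plus a GET against https://192.168.39.254:8443/healthz. The Go sketch below reproduces only the healthz probe for reference. It is not minikube's implementation; the endpoint is the one logged by api_server.go above, while the 5-second timeout and the skipped TLS verification are assumptions made for the sketch.
	// healthz_probe.go: minimal sketch of the apiserver health probe seen above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed timeout for the sketch
			// The control plane uses a cluster-local CA, so certificate
			// verification is skipped here.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log above treats a 200 response with body "ok" as APIServer:Running.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}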
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 3 (4.459058854s)

-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0505 21:24:18.300840   35332 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:24:18.300949   35332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:18.300960   35332 out.go:304] Setting ErrFile to fd 2...
	I0505 21:24:18.300967   35332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:18.301193   35332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:24:18.301384   35332 out.go:298] Setting JSON to false
	I0505 21:24:18.301416   35332 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:18.301524   35332 notify.go:220] Checking for updates...
	I0505 21:24:18.301894   35332 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:18.301915   35332 status.go:255] checking status of ha-322980 ...
	I0505 21:24:18.302339   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:18.302427   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:18.319151   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I0505 21:24:18.319573   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:18.320169   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:18.320185   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:18.320630   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:18.320858   35332 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:24:18.322722   35332 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:24:18.322740   35332 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:18.323033   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:18.323067   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:18.338246   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0505 21:24:18.338646   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:18.339166   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:18.339191   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:18.339570   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:18.339761   35332 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:24:18.342754   35332 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:18.343246   35332 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:18.343263   35332 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:18.343407   35332 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:18.343908   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:18.343958   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:18.360002   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0505 21:24:18.360390   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:18.360964   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:18.361005   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:18.361340   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:18.361560   35332 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:24:18.361793   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:18.361835   35332 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:24:18.365172   35332 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:18.365606   35332 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:18.365632   35332 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:18.365823   35332 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:24:18.366005   35332 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:24:18.366179   35332 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:24:18.366316   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:24:18.466311   35332 ssh_runner.go:195] Run: systemctl --version
	I0505 21:24:18.475773   35332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:18.495352   35332 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:18.495385   35332 api_server.go:166] Checking apiserver status ...
	I0505 21:24:18.495416   35332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:18.514402   35332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:24:18.529014   35332 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:18.529075   35332 ssh_runner.go:195] Run: ls
	I0505 21:24:18.534483   35332 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:18.541429   35332 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:18.541450   35332 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:24:18.541460   35332 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:18.541486   35332 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:24:18.541781   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:18.541813   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:18.557175   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42885
	I0505 21:24:18.557614   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:18.558104   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:18.558125   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:18.558462   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:18.558679   35332 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:24:18.560446   35332 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:24:18.560467   35332 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:18.560761   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:18.560818   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:18.578571   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33809
	I0505 21:24:18.579111   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:18.579680   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:18.579706   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:18.580018   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:18.580263   35332 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:24:18.583360   35332 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:18.583816   35332 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:18.583843   35332 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:18.583967   35332 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:18.584311   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:18.584353   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:18.600635   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45657
	I0505 21:24:18.601025   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:18.601492   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:18.601513   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:18.601832   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:18.602037   35332 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:24:18.602247   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:18.602266   35332 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:24:18.604994   35332 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:18.605431   35332 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:18.605461   35332 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:18.605606   35332 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:24:18.605725   35332 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:24:18.605899   35332 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:24:18.606039   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:24:18.907732   35332 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:18.907800   35332 retry.go:31] will retry after 367.386478ms: dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:22.331754   35332 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:22.331824   35332 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:24:22.331854   35332 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:22.331868   35332 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:24:22.331900   35332 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:22.331910   35332 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:24:22.332327   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:22.332388   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:22.347556   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0505 21:24:22.348042   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:22.348504   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:22.348527   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:22.348876   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:22.349069   35332 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:22.351021   35332 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:24:22.351039   35332 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:22.351341   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:22.351375   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:22.365577   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0505 21:24:22.365925   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:22.366349   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:22.366380   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:22.366663   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:22.366818   35332 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:24:22.369325   35332 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:22.369681   35332 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:22.369707   35332 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:22.369804   35332 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:22.370077   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:22.370108   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:22.385252   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37365
	I0505 21:24:22.385687   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:22.386087   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:22.386112   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:22.386375   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:22.386539   35332 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:22.386734   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:22.386754   35332 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:22.389815   35332 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:22.390225   35332 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:22.390261   35332 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:22.390409   35332 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:22.390573   35332 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:22.390722   35332 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:22.390950   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:22.476065   35332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:22.493114   35332 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:22.493139   35332 api_server.go:166] Checking apiserver status ...
	I0505 21:24:22.493168   35332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:22.508029   35332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:24:22.518789   35332 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:22.518835   35332 ssh_runner.go:195] Run: ls
	I0505 21:24:22.524398   35332 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:22.530579   35332 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:22.530613   35332 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:24:22.530624   35332 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:22.530642   35332 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:24:22.531063   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:22.531098   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:22.546380   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46347
	I0505 21:24:22.546780   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:22.547252   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:22.547276   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:22.547581   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:22.547820   35332 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:22.549309   35332 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:24:22.549321   35332 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:22.549593   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:22.549622   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:22.563866   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0505 21:24:22.564381   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:22.564896   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:22.564923   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:22.565230   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:22.565426   35332 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:24:22.568594   35332 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:22.569105   35332 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:22.569139   35332 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:22.569296   35332 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:22.569589   35332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:22.569645   35332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:22.584273   35332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0505 21:24:22.584630   35332 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:22.585022   35332 main.go:141] libmachine: Using API Version  1
	I0505 21:24:22.585045   35332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:22.585353   35332 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:22.585536   35332 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:22.585680   35332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:22.585701   35332 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:22.588409   35332 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:22.588904   35332 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:22.588928   35332 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:22.589122   35332 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:22.589289   35332 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:22.589402   35332 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:22.589541   35332 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:22.678517   35332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:22.695021   35332 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
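In the run above, ha-322980-m02 is reported Host:Error because the SSH dial to 192.168.39.228:22 fails with "connect: no route to host"; sshutil retries (retry.go logs "will retry after 367.386478ms") before status falls back to Kubelet:Nonexistent and APIServer:Nonexistent. A rough Go sketch of that reachability check follows; the attempt count, per-dial timeout, and retry delay are illustrative assumptions, not minikube's actual backoff policy.
	// ssh_reachability.go: sketch of the TCP reachability check that fails above.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// dialWithRetry tries to open a TCP connection to addr a few times before
	// giving up, loosely mirroring the sshutil/retry behaviour in the log.
	func dialWithRetry(addr string, attempts int, wait time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			var conn net.Conn
			conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(wait) // "will retry after ..." in the log above
		}
		return err
	}
	
	func main() {
		// Address taken from the failing node above; attempts and delay are
		// illustrative values for this sketch.
		if err := dialWithRetry("192.168.39.228:22", 3, 400*time.Millisecond); err != nil {
			// This is the condition that makes status report Host:Error.
			fmt.Println("node unreachable:", err)
		}
	}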
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 3 (3.783025423s)

-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0505 21:24:26.137149   35451 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:24:26.137254   35451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:26.137265   35451 out.go:304] Setting ErrFile to fd 2...
	I0505 21:24:26.137270   35451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:26.137486   35451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:24:26.137689   35451 out.go:298] Setting JSON to false
	I0505 21:24:26.137717   35451 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:26.137831   35451 notify.go:220] Checking for updates...
	I0505 21:24:26.138178   35451 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:26.138193   35451 status.go:255] checking status of ha-322980 ...
	I0505 21:24:26.138628   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:26.138673   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:26.158082   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46363
	I0505 21:24:26.158534   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:26.159129   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:26.159148   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:26.159564   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:26.159759   35451 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:24:26.161450   35451 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:24:26.161481   35451 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:26.161749   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:26.161772   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:26.177834   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0505 21:24:26.178246   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:26.178742   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:26.178771   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:26.179072   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:26.179312   35451 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:24:26.182066   35451 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:26.182487   35451 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:26.182516   35451 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:26.182617   35451 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:26.182887   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:26.182935   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:26.197301   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0505 21:24:26.197680   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:26.198155   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:26.198175   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:26.198439   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:26.198616   35451 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:24:26.198879   35451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:26.198920   35451 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:24:26.201648   35451 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:26.202076   35451 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:26.202108   35451 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:26.202208   35451 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:24:26.202366   35451 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:24:26.202503   35451 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:24:26.202637   35451 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:24:26.289990   35451 ssh_runner.go:195] Run: systemctl --version
	I0505 21:24:26.296986   35451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:26.318216   35451 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:26.318257   35451 api_server.go:166] Checking apiserver status ...
	I0505 21:24:26.318303   35451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:26.342289   35451 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:24:26.353688   35451 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:26.353759   35451 ssh_runner.go:195] Run: ls
	I0505 21:24:26.359693   35451 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:26.372333   35451 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:26.372357   35451 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:24:26.372367   35451 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:26.372384   35451 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:24:26.372719   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:26.372746   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:26.387941   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46473
	I0505 21:24:26.388420   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:26.388950   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:26.388971   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:26.389314   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:26.389532   35451 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:24:26.390891   35451 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:24:26.390902   35451 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:26.391207   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:26.391244   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:26.408186   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0505 21:24:26.408617   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:26.409094   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:26.409109   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:26.409425   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:26.409579   35451 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:24:26.412190   35451 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:26.412651   35451 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:26.412680   35451 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:26.412832   35451 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:24:26.413168   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:26.413205   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:26.427569   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
	I0505 21:24:26.427958   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:26.428403   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:26.428428   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:26.428806   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:26.428989   35451 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:24:26.429189   35451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:26.429210   35451 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:24:26.431882   35451 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:26.432352   35451 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:24:26.432370   35451 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:24:26.432524   35451 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:24:26.432692   35451 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:24:26.432879   35451 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:24:26.433029   35451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:24:29.499716   35451 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:24:29.499827   35451 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:24:29.499850   35451 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:29.499864   35451 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:24:29.499884   35451 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:24:29.499893   35451 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:24:29.500206   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:29.500258   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:29.514953   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
	I0505 21:24:29.515516   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:29.516090   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:29.516114   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:29.516479   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:29.516725   35451 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:29.518650   35451 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:24:29.518667   35451 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:29.519098   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:29.519150   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:29.533136   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37433
	I0505 21:24:29.533511   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:29.533933   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:29.533953   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:29.534229   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:29.534387   35451 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:24:29.536859   35451 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:29.537208   35451 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:29.537230   35451 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:29.537361   35451 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:29.537620   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:29.537650   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:29.551915   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I0505 21:24:29.552290   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:29.552816   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:29.552835   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:29.553097   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:29.553317   35451 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:29.553476   35451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:29.553493   35451 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:29.556204   35451 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:29.556685   35451 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:29.556732   35451 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:29.556934   35451 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:29.557100   35451 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:29.557253   35451 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:29.557389   35451 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:29.644795   35451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:29.664067   35451 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:29.664093   35451 api_server.go:166] Checking apiserver status ...
	I0505 21:24:29.664124   35451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:29.682865   35451 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:24:29.693454   35451 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:29.693505   35451 ssh_runner.go:195] Run: ls
	I0505 21:24:29.698651   35451 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:29.705171   35451 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:29.705191   35451 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:24:29.705202   35451 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:29.705220   35451 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:24:29.705499   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:29.705530   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:29.720691   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0505 21:24:29.721172   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:29.721621   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:29.721641   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:29.721931   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:29.722157   35451 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:29.723769   35451 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:24:29.723783   35451 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:29.724155   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:29.724218   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:29.739236   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46181
	I0505 21:24:29.739692   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:29.740166   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:29.740183   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:29.740498   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:29.740700   35451 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:24:29.743505   35451 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:29.744014   35451 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:29.744049   35451 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:29.744203   35451 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:29.744584   35451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:29.744633   35451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:29.758118   35451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I0505 21:24:29.758471   35451 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:29.758888   35451 main.go:141] libmachine: Using API Version  1
	I0505 21:24:29.758910   35451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:29.759267   35451 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:29.759459   35451 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:29.759628   35451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:29.759645   35451 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:29.762433   35451 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:29.762868   35451 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:29.762894   35451 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:29.763042   35451 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:29.763202   35451 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:29.763326   35451 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:29.763467   35451 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:29.847363   35451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:29.863770   35451 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0505 21:24:31.829613   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
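ha_test.go:428 keeps re-running the status command while ha-322980-m02 is unreachable: the two runs above exit with status 3 while the host is reported Error, and the run below exits with status 7 once every field for m02 is reported Stopped. The sketch below shows one way to re-run the same command from Go and read its exit code; the binary path, profile, and flags are copied from the commands in this log, and no exit-code meaning is assumed beyond what the log itself shows.
	// poll_status.go: sketch of re-running "minikube status" and inspecting
	// its exit code, mirroring the repeated ha_test.go:428 invocations here.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same invocation as the test harness uses in this log.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-322980",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		// A non-zero exit surfaces as *exec.ExitError; this log shows exit
		// status 3 while m02 is Error and 7 once it is reported Stopped.
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status", exitErr.ExitCode())
		}
	}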
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 7 (656.963247ms)

-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:24:40.243720   35601 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:24:40.243853   35601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:40.243867   35601 out.go:304] Setting ErrFile to fd 2...
	I0505 21:24:40.243875   35601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:40.244087   35601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:24:40.244288   35601 out.go:298] Setting JSON to false
	I0505 21:24:40.244319   35601 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:40.244359   35601 notify.go:220] Checking for updates...
	I0505 21:24:40.244675   35601 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:40.244690   35601 status.go:255] checking status of ha-322980 ...
	I0505 21:24:40.245139   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.245198   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.260076   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0505 21:24:40.260444   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.260990   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.261014   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.261469   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.261700   35601 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:24:40.263364   35601 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:24:40.263378   35601 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:40.263673   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.263728   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.278982   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37861
	I0505 21:24:40.279305   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.279729   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.279749   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.280020   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.280185   35601 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:24:40.283005   35601 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:40.283420   35601 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:40.283452   35601 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:40.283593   35601 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:40.283962   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.284005   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.298029   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0505 21:24:40.298350   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.298859   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.298887   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.299183   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.299365   35601 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:24:40.299538   35601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:40.299575   35601 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:24:40.302418   35601 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:40.302861   35601 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:40.302883   35601 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:40.303086   35601 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:24:40.303232   35601 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:24:40.303399   35601 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:24:40.303584   35601 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:24:40.392386   35601 ssh_runner.go:195] Run: systemctl --version
	I0505 21:24:40.403732   35601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:40.420699   35601 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:40.420742   35601 api_server.go:166] Checking apiserver status ...
	I0505 21:24:40.420774   35601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:40.437014   35601 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:24:40.447574   35601 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:40.447613   35601 ssh_runner.go:195] Run: ls
	I0505 21:24:40.453299   35601 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:40.458585   35601 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:40.458605   35601 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:24:40.458617   35601 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:40.458638   35601 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:24:40.459053   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.459109   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.473396   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37185
	I0505 21:24:40.473768   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.474307   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.474336   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.474635   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.474854   35601 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:24:40.476190   35601 status.go:330] ha-322980-m02 host status = "Stopped" (err=<nil>)
	I0505 21:24:40.476201   35601 status.go:343] host is not running, skipping remaining checks
	I0505 21:24:40.476206   35601 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:40.476221   35601 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:24:40.476594   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.476639   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.491256   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0505 21:24:40.491674   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.492159   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.492174   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.492481   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.492697   35601 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:40.494518   35601 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:24:40.494532   35601 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:40.494808   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.494840   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.510409   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0505 21:24:40.510832   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.511316   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.511350   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.511674   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.511864   35601 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:24:40.514575   35601 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:40.515026   35601 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:40.515055   35601 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:40.515217   35601 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:40.515690   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.515732   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.530118   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0505 21:24:40.530448   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.530997   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.531020   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.531353   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.531545   35601 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:40.531748   35601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:40.531768   35601 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:40.534422   35601 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:40.534911   35601 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:40.534940   35601 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:40.535101   35601 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:40.535274   35601 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:40.535427   35601 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:40.535568   35601 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:40.620590   35601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:40.638065   35601 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:40.638089   35601 api_server.go:166] Checking apiserver status ...
	I0505 21:24:40.638123   35601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:40.654512   35601 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:24:40.669909   35601 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:40.669966   35601 ssh_runner.go:195] Run: ls
	I0505 21:24:40.675760   35601 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:40.680338   35601 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:40.680358   35601 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:24:40.680366   35601 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:40.680378   35601 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:24:40.680647   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.680693   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.695913   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41025
	I0505 21:24:40.696374   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.696891   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.696914   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.697271   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.697450   35601 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:40.698880   35601 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:24:40.698896   35601 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:40.699180   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.699227   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.713031   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0505 21:24:40.713409   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.713862   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.713888   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.714202   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.714401   35601 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:24:40.717100   35601 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:40.717489   35601 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:40.717512   35601 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:40.717696   35601 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:40.717979   35601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:40.718010   35601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:40.732697   35601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0505 21:24:40.733070   35601 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:40.733471   35601 main.go:141] libmachine: Using API Version  1
	I0505 21:24:40.733488   35601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:40.733751   35601 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:40.733926   35601 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:40.734078   35601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:40.734102   35601 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:40.736718   35601 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:40.737118   35601 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:40.737142   35601 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:40.737309   35601 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:40.737489   35601 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:40.737622   35601 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:40.737818   35601 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:40.827778   35601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:40.844283   35601 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
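Each control-plane entry in the trace above is validated the same way: an SSH session checks kubelet with systemctl is-active, then the shared endpoint https://192.168.39.254:8443/healthz is polled and expected to answer 200 "ok". A minimal standalone sketch of that health check in Go follows; the endpoint comes from the log, and skipping TLS verification is a shortcut for this sketch only, not what minikube does.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut for this sketch only: skip certificate verification
			// rather than loading the cluster's CA as a real client would.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", matching the log above.
	fmt.Println(resp.StatusCode, string(body))
}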
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 7 (668.909ms)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-322980-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:24:46.790193   35690 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:24:46.790469   35690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:46.790481   35690 out.go:304] Setting ErrFile to fd 2...
	I0505 21:24:46.790488   35690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:46.790769   35690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:24:46.790980   35690 out.go:298] Setting JSON to false
	I0505 21:24:46.791015   35690 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:46.791074   35690 notify.go:220] Checking for updates...
	I0505 21:24:46.791454   35690 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:46.791469   35690 status.go:255] checking status of ha-322980 ...
	I0505 21:24:46.791919   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:46.791978   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:46.806923   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40309
	I0505 21:24:46.807384   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:46.808015   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:46.808039   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:46.808416   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:46.808682   35690 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:24:46.810501   35690 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:24:46.810522   35690 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:46.810827   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:46.810865   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:46.829576   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0505 21:24:46.829979   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:46.830440   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:46.830476   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:46.830829   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:46.830996   35690 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:24:46.833941   35690 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:46.834431   35690 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:46.834461   35690 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:46.834689   35690 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:24:46.835075   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:46.835122   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:46.851463   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I0505 21:24:46.851953   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:46.852474   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:46.852496   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:46.852803   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:46.852985   35690 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:24:46.853160   35690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:46.853195   35690 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:24:46.855848   35690 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:46.856289   35690 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:24:46.856319   35690 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:24:46.856429   35690 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:24:46.856599   35690 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:24:46.856786   35690 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:24:46.856897   35690 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:24:46.945915   35690 ssh_runner.go:195] Run: systemctl --version
	I0505 21:24:46.952865   35690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:46.969940   35690 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:46.969968   35690 api_server.go:166] Checking apiserver status ...
	I0505 21:24:46.970003   35690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:46.986820   35690 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	W0505 21:24:46.997313   35690 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:46.997372   35690 ssh_runner.go:195] Run: ls
	I0505 21:24:47.002300   35690 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:47.010245   35690 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:47.010271   35690 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:24:47.010284   35690 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:47.010304   35690 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:24:47.010587   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:47.010625   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:47.025242   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0505 21:24:47.025657   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:47.026062   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:47.026085   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:47.026470   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:47.026652   35690 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:24:47.028148   35690 status.go:330] ha-322980-m02 host status = "Stopped" (err=<nil>)
	I0505 21:24:47.028165   35690 status.go:343] host is not running, skipping remaining checks
	I0505 21:24:47.028173   35690 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:47.028206   35690 status.go:255] checking status of ha-322980-m03 ...
	I0505 21:24:47.028597   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:47.028645   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:47.042804   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0505 21:24:47.043230   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:47.043697   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:47.043727   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:47.044073   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:47.044274   35690 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:47.045821   35690 status.go:330] ha-322980-m03 host status = "Running" (err=<nil>)
	I0505 21:24:47.045837   35690 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:47.046122   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:47.046153   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:47.063303   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
	I0505 21:24:47.063716   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:47.064273   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:47.064297   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:47.064690   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:47.064907   35690 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:24:47.068046   35690 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:47.068466   35690 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:47.068483   35690 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:47.068630   35690 host.go:66] Checking if "ha-322980-m03" exists ...
	I0505 21:24:47.069064   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:47.069110   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:47.083814   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44877
	I0505 21:24:47.084104   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:47.084532   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:47.084550   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:47.084805   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:47.085029   35690 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:47.085173   35690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:47.085191   35690 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:47.087653   35690 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:47.088066   35690 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:47.088091   35690 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:47.088321   35690 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:47.088469   35690 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:47.088609   35690 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:47.088775   35690 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:47.172355   35690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:47.198430   35690 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:24:47.198464   35690 api_server.go:166] Checking apiserver status ...
	I0505 21:24:47.198507   35690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:24:47.214732   35690 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0505 21:24:47.228340   35690 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:24:47.228399   35690 ssh_runner.go:195] Run: ls
	I0505 21:24:47.233788   35690 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:24:47.238681   35690 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:24:47.238711   35690 status.go:422] ha-322980-m03 apiserver status = Running (err=<nil>)
	I0505 21:24:47.238723   35690 status.go:257] ha-322980-m03 status: &{Name:ha-322980-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:24:47.238739   35690 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:24:47.239111   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:47.239154   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:47.255312   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0505 21:24:47.255725   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:47.256233   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:47.256255   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:47.256614   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:47.256801   35690 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:47.258381   35690 status.go:330] ha-322980-m04 host status = "Running" (err=<nil>)
	I0505 21:24:47.258397   35690 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:47.258665   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:47.258696   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:47.272746   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0505 21:24:47.273134   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:47.273624   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:47.273647   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:47.273918   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:47.274099   35690 main.go:141] libmachine: (ha-322980-m04) Calling .GetIP
	I0505 21:24:47.276684   35690 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:47.277058   35690 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:47.277095   35690 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:47.277240   35690 host.go:66] Checking if "ha-322980-m04" exists ...
	I0505 21:24:47.277555   35690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:47.277597   35690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:47.291599   35690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44865
	I0505 21:24:47.292140   35690 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:47.292649   35690 main.go:141] libmachine: Using API Version  1
	I0505 21:24:47.292671   35690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:47.292996   35690 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:47.293210   35690 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:47.293419   35690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:24:47.293439   35690 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:47.296260   35690 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:47.296765   35690 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:47.296797   35690 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:47.296946   35690 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:47.297131   35690 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:47.297277   35690 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:47.297429   35690 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:47.383389   35690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:24:47.399657   35690 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-322980 -n ha-322980
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-322980 logs -n 25: (1.588567341s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m03_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m04 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp testdata/cp-test.txt                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m04_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03:/home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m03 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-322980 node stop m02 -v=7                                                     | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-322980 node start m02 -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:15:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:15:28.192694   29367 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:15:28.192822   29367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:15:28.192834   29367 out.go:304] Setting ErrFile to fd 2...
	I0505 21:15:28.192839   29367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:15:28.193040   29367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:15:28.193594   29367 out.go:298] Setting JSON to false
	I0505 21:15:28.194511   29367 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3475,"bootTime":1714940253,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:15:28.194576   29367 start.go:139] virtualization: kvm guest
	I0505 21:15:28.196753   29367 out.go:177] * [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:15:28.198175   29367 notify.go:220] Checking for updates...
	I0505 21:15:28.198200   29367 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:15:28.199714   29367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:15:28.201298   29367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:15:28.202627   29367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:15:28.204102   29367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:15:28.205596   29367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:15:28.206976   29367 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:15:28.240336   29367 out.go:177] * Using the kvm2 driver based on user configuration
	I0505 21:15:28.241665   29367 start.go:297] selected driver: kvm2
	I0505 21:15:28.241678   29367 start.go:901] validating driver "kvm2" against <nil>
	I0505 21:15:28.241688   29367 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:15:28.242280   29367 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:15:28.242338   29367 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:15:28.256278   29367 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:15:28.256351   29367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 21:15:28.256556   29367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:15:28.256600   29367 cni.go:84] Creating CNI manager for ""
	I0505 21:15:28.256611   29367 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0505 21:15:28.256617   29367 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0505 21:15:28.256669   29367 start.go:340] cluster config:
	{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:15:28.256755   29367 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:15:28.259217   29367 out.go:177] * Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	I0505 21:15:28.260551   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:15:28.260586   29367 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:15:28.260596   29367 cache.go:56] Caching tarball of preloaded images
	I0505 21:15:28.260684   29367 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:15:28.260695   29367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:15:28.260971   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:15:28.260991   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json: {Name:mkcd41b605e73b5e716932d5592f48027cf09c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:28.261114   29367 start.go:360] acquireMachinesLock for ha-322980: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:15:28.261142   29367 start.go:364] duration metric: took 14.244µs to acquireMachinesLock for "ha-322980"
	I0505 21:15:28.261158   29367 start.go:93] Provisioning new machine with config: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:15:28.261248   29367 start.go:125] createHost starting for "" (driver="kvm2")
	I0505 21:15:28.263067   29367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 21:15:28.263187   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:15:28.263229   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:15:28.277004   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
	I0505 21:15:28.277389   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:15:28.278009   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:15:28.278028   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:15:28.278337   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:15:28.278503   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:28.278611   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:28.278763   29367 start.go:159] libmachine.API.Create for "ha-322980" (driver="kvm2")
	I0505 21:15:28.278784   29367 client.go:168] LocalClient.Create starting
	I0505 21:15:28.278807   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 21:15:28.278833   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:15:28.278847   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:15:28.278893   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 21:15:28.278918   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:15:28.278931   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:15:28.278947   29367 main.go:141] libmachine: Running pre-create checks...
	I0505 21:15:28.278955   29367 main.go:141] libmachine: (ha-322980) Calling .PreCreateCheck
	I0505 21:15:28.279269   29367 main.go:141] libmachine: (ha-322980) Calling .GetConfigRaw
	I0505 21:15:28.279626   29367 main.go:141] libmachine: Creating machine...
	I0505 21:15:28.279639   29367 main.go:141] libmachine: (ha-322980) Calling .Create
	I0505 21:15:28.279750   29367 main.go:141] libmachine: (ha-322980) Creating KVM machine...
	I0505 21:15:28.280835   29367 main.go:141] libmachine: (ha-322980) DBG | found existing default KVM network
	I0505 21:15:28.281458   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.281306   29390 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0505 21:15:28.281491   29367 main.go:141] libmachine: (ha-322980) DBG | created network xml: 
	I0505 21:15:28.281504   29367 main.go:141] libmachine: (ha-322980) DBG | <network>
	I0505 21:15:28.281520   29367 main.go:141] libmachine: (ha-322980) DBG |   <name>mk-ha-322980</name>
	I0505 21:15:28.281526   29367 main.go:141] libmachine: (ha-322980) DBG |   <dns enable='no'/>
	I0505 21:15:28.281530   29367 main.go:141] libmachine: (ha-322980) DBG |   
	I0505 21:15:28.281539   29367 main.go:141] libmachine: (ha-322980) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0505 21:15:28.281545   29367 main.go:141] libmachine: (ha-322980) DBG |     <dhcp>
	I0505 21:15:28.281552   29367 main.go:141] libmachine: (ha-322980) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0505 21:15:28.281559   29367 main.go:141] libmachine: (ha-322980) DBG |     </dhcp>
	I0505 21:15:28.281564   29367 main.go:141] libmachine: (ha-322980) DBG |   </ip>
	I0505 21:15:28.281569   29367 main.go:141] libmachine: (ha-322980) DBG |   
	I0505 21:15:28.281574   29367 main.go:141] libmachine: (ha-322980) DBG | </network>
	I0505 21:15:28.281581   29367 main.go:141] libmachine: (ha-322980) DBG | 
	I0505 21:15:28.286231   29367 main.go:141] libmachine: (ha-322980) DBG | trying to create private KVM network mk-ha-322980 192.168.39.0/24...
	I0505 21:15:28.349262   29367 main.go:141] libmachine: (ha-322980) DBG | private KVM network mk-ha-322980 192.168.39.0/24 created
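[editor's example] The "created network xml" block above is a complete libvirt network definition: an isolated /24 with DHCP handing out 192.168.39.2-253 and the host holding .1. A minimal sketch of defining and starting the same network with the libvirt-go bindings is shown below; it reuses the XML and URI printed in the log but is illustrative only, not a copy of minikube's driver code (the libvirt-go dependency is assumed to be available).

// network_sketch.go — illustrative only; mirrors the XML printed in the log above.
package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go" // assumed dependency
)

const networkXML = `<network>
  <name>mk-ha-322980</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the persistent network, then start it — roughly the step the
	// "trying to create private KVM network" log line corresponds to.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer net.Free()

	if err := net.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	fmt.Println("private network mk-ha-322980 is active")
}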
	I0505 21:15:28.349288   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.349223   29390 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:15:28.349301   29367 main.go:141] libmachine: (ha-322980) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980 ...
	I0505 21:15:28.349318   29367 main.go:141] libmachine: (ha-322980) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 21:15:28.349344   29367 main.go:141] libmachine: (ha-322980) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 21:15:28.575989   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.575855   29390 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa...
	I0505 21:15:28.638991   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.638848   29390 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/ha-322980.rawdisk...
	I0505 21:15:28.639022   29367 main.go:141] libmachine: (ha-322980) DBG | Writing magic tar header
	I0505 21:15:28.639075   29367 main.go:141] libmachine: (ha-322980) DBG | Writing SSH key tar header
	I0505 21:15:28.639113   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980 (perms=drwx------)
	I0505 21:15:28.639131   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:28.638957   29390 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980 ...
	I0505 21:15:28.639141   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980
	I0505 21:15:28.639148   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 21:15:28.639158   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 21:15:28.639166   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 21:15:28.639180   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 21:15:28.639194   29367 main.go:141] libmachine: (ha-322980) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 21:15:28.639208   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 21:15:28.639221   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:15:28.639230   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 21:15:28.639235   29367 main.go:141] libmachine: (ha-322980) Creating domain...
	I0505 21:15:28.639247   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 21:15:28.639254   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home/jenkins
	I0505 21:15:28.639260   29367 main.go:141] libmachine: (ha-322980) DBG | Checking permissions on dir: /home
	I0505 21:15:28.639265   29367 main.go:141] libmachine: (ha-322980) DBG | Skipping /home - not owner
	I0505 21:15:28.640341   29367 main.go:141] libmachine: (ha-322980) define libvirt domain using xml: 
	I0505 21:15:28.640365   29367 main.go:141] libmachine: (ha-322980) <domain type='kvm'>
	I0505 21:15:28.640396   29367 main.go:141] libmachine: (ha-322980)   <name>ha-322980</name>
	I0505 21:15:28.640419   29367 main.go:141] libmachine: (ha-322980)   <memory unit='MiB'>2200</memory>
	I0505 21:15:28.640435   29367 main.go:141] libmachine: (ha-322980)   <vcpu>2</vcpu>
	I0505 21:15:28.640447   29367 main.go:141] libmachine: (ha-322980)   <features>
	I0505 21:15:28.640460   29367 main.go:141] libmachine: (ha-322980)     <acpi/>
	I0505 21:15:28.640472   29367 main.go:141] libmachine: (ha-322980)     <apic/>
	I0505 21:15:28.640483   29367 main.go:141] libmachine: (ha-322980)     <pae/>
	I0505 21:15:28.640502   29367 main.go:141] libmachine: (ha-322980)     
	I0505 21:15:28.640515   29367 main.go:141] libmachine: (ha-322980)   </features>
	I0505 21:15:28.640525   29367 main.go:141] libmachine: (ha-322980)   <cpu mode='host-passthrough'>
	I0505 21:15:28.640538   29367 main.go:141] libmachine: (ha-322980)   
	I0505 21:15:28.640550   29367 main.go:141] libmachine: (ha-322980)   </cpu>
	I0505 21:15:28.640590   29367 main.go:141] libmachine: (ha-322980)   <os>
	I0505 21:15:28.640634   29367 main.go:141] libmachine: (ha-322980)     <type>hvm</type>
	I0505 21:15:28.640650   29367 main.go:141] libmachine: (ha-322980)     <boot dev='cdrom'/>
	I0505 21:15:28.640722   29367 main.go:141] libmachine: (ha-322980)     <boot dev='hd'/>
	I0505 21:15:28.640747   29367 main.go:141] libmachine: (ha-322980)     <bootmenu enable='no'/>
	I0505 21:15:28.640770   29367 main.go:141] libmachine: (ha-322980)   </os>
	I0505 21:15:28.640791   29367 main.go:141] libmachine: (ha-322980)   <devices>
	I0505 21:15:28.640803   29367 main.go:141] libmachine: (ha-322980)     <disk type='file' device='cdrom'>
	I0505 21:15:28.640811   29367 main.go:141] libmachine: (ha-322980)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/boot2docker.iso'/>
	I0505 21:15:28.640821   29367 main.go:141] libmachine: (ha-322980)       <target dev='hdc' bus='scsi'/>
	I0505 21:15:28.640837   29367 main.go:141] libmachine: (ha-322980)       <readonly/>
	I0505 21:15:28.640848   29367 main.go:141] libmachine: (ha-322980)     </disk>
	I0505 21:15:28.640857   29367 main.go:141] libmachine: (ha-322980)     <disk type='file' device='disk'>
	I0505 21:15:28.640872   29367 main.go:141] libmachine: (ha-322980)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 21:15:28.640899   29367 main.go:141] libmachine: (ha-322980)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/ha-322980.rawdisk'/>
	I0505 21:15:28.640923   29367 main.go:141] libmachine: (ha-322980)       <target dev='hda' bus='virtio'/>
	I0505 21:15:28.640936   29367 main.go:141] libmachine: (ha-322980)     </disk>
	I0505 21:15:28.640946   29367 main.go:141] libmachine: (ha-322980)     <interface type='network'>
	I0505 21:15:28.640961   29367 main.go:141] libmachine: (ha-322980)       <source network='mk-ha-322980'/>
	I0505 21:15:28.640973   29367 main.go:141] libmachine: (ha-322980)       <model type='virtio'/>
	I0505 21:15:28.640985   29367 main.go:141] libmachine: (ha-322980)     </interface>
	I0505 21:15:28.641002   29367 main.go:141] libmachine: (ha-322980)     <interface type='network'>
	I0505 21:15:28.641017   29367 main.go:141] libmachine: (ha-322980)       <source network='default'/>
	I0505 21:15:28.641027   29367 main.go:141] libmachine: (ha-322980)       <model type='virtio'/>
	I0505 21:15:28.641037   29367 main.go:141] libmachine: (ha-322980)     </interface>
	I0505 21:15:28.641049   29367 main.go:141] libmachine: (ha-322980)     <serial type='pty'>
	I0505 21:15:28.641069   29367 main.go:141] libmachine: (ha-322980)       <target port='0'/>
	I0505 21:15:28.641077   29367 main.go:141] libmachine: (ha-322980)     </serial>
	I0505 21:15:28.641083   29367 main.go:141] libmachine: (ha-322980)     <console type='pty'>
	I0505 21:15:28.641090   29367 main.go:141] libmachine: (ha-322980)       <target type='serial' port='0'/>
	I0505 21:15:28.641097   29367 main.go:141] libmachine: (ha-322980)     </console>
	I0505 21:15:28.641103   29367 main.go:141] libmachine: (ha-322980)     <rng model='virtio'>
	I0505 21:15:28.641109   29367 main.go:141] libmachine: (ha-322980)       <backend model='random'>/dev/random</backend>
	I0505 21:15:28.641116   29367 main.go:141] libmachine: (ha-322980)     </rng>
	I0505 21:15:28.641121   29367 main.go:141] libmachine: (ha-322980)     
	I0505 21:15:28.641130   29367 main.go:141] libmachine: (ha-322980)     
	I0505 21:15:28.641138   29367 main.go:141] libmachine: (ha-322980)   </devices>
	I0505 21:15:28.641142   29367 main.go:141] libmachine: (ha-322980) </domain>
	I0505 21:15:28.641166   29367 main.go:141] libmachine: (ha-322980) 
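[editor's example] The domain XML above follows a fixed skeleton with the machine name, memory, vCPU count, ISO path, raw-disk path and network name filled in. A hypothetical sketch of rendering such a document with Go's text/template follows; the field names and placeholder paths are ours for illustration, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed-down skeleton of the XML shown in the log;
// only the fields that vary per machine are parameterized.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.RawDisk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

type machine struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISO       string
	RawDisk   string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	m := machine{
		Name:      "ha-322980",
		MemoryMiB: 2200,
		CPUs:      2,
		ISO:       "/path/to/boot2docker.iso",   // placeholder path
		RawDisk:   "/path/to/ha-322980.rawdisk", // placeholder path
		Network:   "mk-ha-322980",
	}
	if err := t.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}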
	I0505 21:15:28.645282   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:1e:18:46 in network default
	I0505 21:15:28.645839   29367 main.go:141] libmachine: (ha-322980) Ensuring networks are active...
	I0505 21:15:28.645853   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:28.646494   29367 main.go:141] libmachine: (ha-322980) Ensuring network default is active
	I0505 21:15:28.646824   29367 main.go:141] libmachine: (ha-322980) Ensuring network mk-ha-322980 is active
	I0505 21:15:28.647503   29367 main.go:141] libmachine: (ha-322980) Getting domain xml...
	I0505 21:15:28.648454   29367 main.go:141] libmachine: (ha-322980) Creating domain...
	I0505 21:15:29.809417   29367 main.go:141] libmachine: (ha-322980) Waiting to get IP...
	I0505 21:15:29.810285   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:29.810703   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:29.810752   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:29.810700   29390 retry.go:31] will retry after 224.872521ms: waiting for machine to come up
	I0505 21:15:30.037302   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:30.037791   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:30.037814   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:30.037752   29390 retry.go:31] will retry after 295.377047ms: waiting for machine to come up
	I0505 21:15:30.335326   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:30.335810   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:30.335840   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:30.335751   29390 retry.go:31] will retry after 344.396951ms: waiting for machine to come up
	I0505 21:15:30.682167   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:30.682556   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:30.682601   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:30.682539   29390 retry.go:31] will retry after 436.748422ms: waiting for machine to come up
	I0505 21:15:31.121290   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:31.121701   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:31.121730   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:31.121670   29390 retry.go:31] will retry after 732.144029ms: waiting for machine to come up
	I0505 21:15:31.855412   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:31.855798   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:31.855827   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:31.855742   29390 retry.go:31] will retry after 897.748028ms: waiting for machine to come up
	I0505 21:15:32.754714   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:32.755252   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:32.755296   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:32.755209   29390 retry.go:31] will retry after 944.202996ms: waiting for machine to come up
	I0505 21:15:33.701028   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:33.701492   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:33.701524   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:33.701454   29390 retry.go:31] will retry after 926.520724ms: waiting for machine to come up
	I0505 21:15:34.629504   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:34.629929   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:34.629958   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:34.629897   29390 retry.go:31] will retry after 1.386455445s: waiting for machine to come up
	I0505 21:15:36.018319   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:36.018716   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:36.018744   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:36.018672   29390 retry.go:31] will retry after 1.708193894s: waiting for machine to come up
	I0505 21:15:37.728811   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:37.729339   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:37.729369   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:37.729277   29390 retry.go:31] will retry after 2.129933651s: waiting for machine to come up
	I0505 21:15:39.861508   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:39.861977   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:39.862013   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:39.861925   29390 retry.go:31] will retry after 3.149022906s: waiting for machine to come up
	I0505 21:15:43.014261   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:43.014694   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:43.014726   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:43.014669   29390 retry.go:31] will retry after 3.501000441s: waiting for machine to come up
	I0505 21:15:46.520000   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:46.520497   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find current IP address of domain ha-322980 in network mk-ha-322980
	I0505 21:15:46.520523   29367 main.go:141] libmachine: (ha-322980) DBG | I0505 21:15:46.520460   29390 retry.go:31] will retry after 5.233613527s: waiting for machine to come up
	I0505 21:15:51.757587   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.758063   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has current primary IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.758085   29367 main.go:141] libmachine: (ha-322980) Found IP for machine: 192.168.39.178
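[editor's example] The "will retry after ... waiting for machine to come up" lines above show a polling loop whose delay grows from roughly 200ms to a few seconds until the DHCP lease appears. A small, self-contained sketch of that pattern (jittered, capped exponential backoff) is below; the lookupIP helper is a stand-in, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a stand-in for querying the DHCP leases of the libvirt network.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the 6th try
		return "", errNoIP
	}
	return "192.168.39.178", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Jittered exponential backoff, capped so a slow boot never
		// pushes a single wait past ~5s.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		if wait > 5*time.Second {
			wait = 5 * time.Second
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
}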
	I0505 21:15:51.758095   29367 main.go:141] libmachine: (ha-322980) Reserving static IP address...
	I0505 21:15:51.758503   29367 main.go:141] libmachine: (ha-322980) DBG | unable to find host DHCP lease matching {name: "ha-322980", mac: "52:54:00:b4:13:35", ip: "192.168.39.178"} in network mk-ha-322980
	I0505 21:15:51.828261   29367 main.go:141] libmachine: (ha-322980) Reserved static IP address: 192.168.39.178
	I0505 21:15:51.828288   29367 main.go:141] libmachine: (ha-322980) Waiting for SSH to be available...
	I0505 21:15:51.828298   29367 main.go:141] libmachine: (ha-322980) DBG | Getting to WaitForSSH function...
	I0505 21:15:51.830888   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.831206   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:51.831227   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.831458   29367 main.go:141] libmachine: (ha-322980) DBG | Using SSH client type: external
	I0505 21:15:51.831499   29367 main.go:141] libmachine: (ha-322980) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa (-rw-------)
	I0505 21:15:51.831531   29367 main.go:141] libmachine: (ha-322980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:15:51.831545   29367 main.go:141] libmachine: (ha-322980) DBG | About to run SSH command:
	I0505 21:15:51.831557   29367 main.go:141] libmachine: (ha-322980) DBG | exit 0
	I0505 21:15:51.963706   29367 main.go:141] libmachine: (ha-322980) DBG | SSH cmd err, output: <nil>: 
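[editor's example] The "Using SSH client type: external" block shows how the driver decides SSH is up: it runs "exit 0" over ssh with host-key checking disabled and the machine's private key, and treats a zero exit status as success. A hedged sketch of the same probe via os/exec follows; the address and key path are taken from the log, while the sshReady helper name and the 2s poll interval are ours.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the target host with flags similar to the
// external SSH client invocation in the log. It returns nil once the
// command succeeds, i.e. sshd accepts connections with the given key.
func sshReady(addr, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0",
	)
	return cmd.Run()
}

func main() {
	addr := "192.168.39.178"
	key := "/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa"
	for {
		if err := sshReady(addr, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}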
	I0505 21:15:51.963939   29367 main.go:141] libmachine: (ha-322980) KVM machine creation complete!
	I0505 21:15:51.964298   29367 main.go:141] libmachine: (ha-322980) Calling .GetConfigRaw
	I0505 21:15:51.964922   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:51.965126   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:51.965287   29367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 21:15:51.965302   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:15:51.966422   29367 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 21:15:51.966438   29367 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 21:15:51.966446   29367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 21:15:51.966454   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:51.968657   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.968955   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:51.969006   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:51.969066   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:51.969215   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:51.969330   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:51.969494   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:51.969595   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:51.969765   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:51.969776   29367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 21:15:52.079133   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:15:52.079164   29367 main.go:141] libmachine: Detecting the provisioner...
	I0505 21:15:52.079172   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.081815   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.082187   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.082216   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.082460   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.082660   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.082896   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.083061   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.083231   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.083444   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.083458   29367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 21:15:52.192292   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 21:15:52.192350   29367 main.go:141] libmachine: found compatible host: buildroot
	I0505 21:15:52.192359   29367 main.go:141] libmachine: Provisioning with buildroot...
	I0505 21:15:52.192370   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:52.192643   29367 buildroot.go:166] provisioning hostname "ha-322980"
	I0505 21:15:52.192662   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:52.192841   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.195494   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.195879   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.195898   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.196101   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.196276   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.196417   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.196534   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.196696   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.196858   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.196868   29367 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980 && echo "ha-322980" | sudo tee /etc/hostname
	I0505 21:15:52.319248   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:15:52.319297   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.321946   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.322311   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.322338   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.322499   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.322732   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.322864   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.323023   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.323163   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.323366   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.323392   29367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:15:52.441696   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:15:52.441734   29367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:15:52.441772   29367 buildroot.go:174] setting up certificates
	I0505 21:15:52.441783   29367 provision.go:84] configureAuth start
	I0505 21:15:52.441792   29367 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:15:52.442117   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:52.444978   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.445360   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.445391   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.445545   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.447772   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.448155   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.448193   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.448203   29367 provision.go:143] copyHostCerts
	I0505 21:15:52.448245   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:15:52.448275   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:15:52.448284   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:15:52.448352   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:15:52.448435   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:15:52.448454   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:15:52.448462   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:15:52.448504   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:15:52.448562   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:15:52.448582   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:15:52.448589   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:15:52.448620   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:15:52.448701   29367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980 san=[127.0.0.1 192.168.39.178 ha-322980 localhost minikube]
	I0505 21:15:52.539458   29367 provision.go:177] copyRemoteCerts
	I0505 21:15:52.539531   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:15:52.539554   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.542206   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.542557   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.542582   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.542752   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.542925   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.543062   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.543179   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:52.628431   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:15:52.628506   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:15:52.655798   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:15:52.655877   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0505 21:15:52.681175   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:15:52.681258   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 21:15:52.706740   29367 provision.go:87] duration metric: took 264.947145ms to configureAuth
	I0505 21:15:52.706766   29367 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:15:52.706930   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:15:52.706995   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:52.709586   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.709960   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:52.709990   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:52.710162   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:52.710322   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.710478   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:52.710570   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:52.710696   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:52.710859   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:52.710875   29367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:15:53.006304   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:15:53.006333   29367 main.go:141] libmachine: Checking connection to Docker...
	I0505 21:15:53.006358   29367 main.go:141] libmachine: (ha-322980) Calling .GetURL
	I0505 21:15:53.007738   29367 main.go:141] libmachine: (ha-322980) DBG | Using libvirt version 6000000
	I0505 21:15:53.011167   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.011587   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.011610   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.011767   29367 main.go:141] libmachine: Docker is up and running!
	I0505 21:15:53.011809   29367 main.go:141] libmachine: Reticulating splines...
	I0505 21:15:53.011819   29367 client.go:171] duration metric: took 24.733029739s to LocalClient.Create
	I0505 21:15:53.011841   29367 start.go:167] duration metric: took 24.733077709s to libmachine.API.Create "ha-322980"
	I0505 21:15:53.011854   29367 start.go:293] postStartSetup for "ha-322980" (driver="kvm2")
	I0505 21:15:53.011867   29367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:15:53.011882   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.012119   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:15:53.012143   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.014385   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.014755   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.014781   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.015014   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.015207   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.015495   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.015629   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:53.099090   29367 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:15:53.103691   29367 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:15:53.103710   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:15:53.103760   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:15:53.103845   29367 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:15:53.103856   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:15:53.103945   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:15:53.114809   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:15:53.139829   29367 start.go:296] duration metric: took 127.963218ms for postStartSetup
	I0505 21:15:53.139873   29367 main.go:141] libmachine: (ha-322980) Calling .GetConfigRaw
	I0505 21:15:53.140452   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:53.143012   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.143276   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.143294   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.143579   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:15:53.143789   29367 start.go:128] duration metric: took 24.882530508s to createHost
	I0505 21:15:53.143822   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.146037   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.146352   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.146379   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.146527   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.146704   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.146847   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.146984   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.147126   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:15:53.147322   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:15:53.147339   29367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:15:53.256861   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714943753.206706515
	
	I0505 21:15:53.256880   29367 fix.go:216] guest clock: 1714943753.206706515
	I0505 21:15:53.256887   29367 fix.go:229] Guest: 2024-05-05 21:15:53.206706515 +0000 UTC Remote: 2024-05-05 21:15:53.14380974 +0000 UTC m=+25.006569318 (delta=62.896775ms)
	I0505 21:15:53.256905   29367 fix.go:200] guest clock delta is within tolerance: 62.896775ms
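[editor's example] The guest clock check above runs `date +%s.%N` in the VM, parses the output as a fractional Unix timestamp, and compares it with the host's "now"; the delta (about 63ms here) must stay inside a tolerance before provisioning continues. A minimal sketch of that comparison is below; the 1s tolerance is an assumption for illustration, and float parsing loses sub-microsecond precision, which is fine for a skew check.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714943753.206706515") // value from the log
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)

	// Tolerance chosen for illustration; a skew beyond it would call for
	// resyncing the guest clock before TLS material is generated.
	const tolerance = 1 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, resync needed\n", delta)
	}
}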
	I0505 21:15:53.256911   29367 start.go:83] releasing machines lock for "ha-322980", held for 24.995760647s
	I0505 21:15:53.256934   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.257228   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:53.259522   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.259876   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.259902   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.260008   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.260428   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.260593   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:15:53.260708   29367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:15:53.260753   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.260808   29367 ssh_runner.go:195] Run: cat /version.json
	I0505 21:15:53.260841   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:15:53.263354   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263387   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263695   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.263719   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263744   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:53.263759   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:53.263866   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.264048   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.264065   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:15:53.264201   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.264218   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:15:53.264310   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:53.264387   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:15:53.264498   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:15:53.368839   29367 ssh_runner.go:195] Run: systemctl --version
	I0505 21:15:53.375745   29367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:15:53.548045   29367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:15:53.554925   29367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:15:53.554995   29367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:15:53.575884   29367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
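For context, the step above (finding bridge/podman CNI configs and renaming them with a ".mk_disabled" suffix) can be expressed as a small standalone Go program. This is a minimal sketch, not minikube's implementation; the directory and suffix simply mirror the log:

// disable_cni.go: rename bridge/podman CNI configs so they stop being picked up.
// Illustrative sketch only; paths and suffix are taken from the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read dir:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and configs that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, "rename:", err)
			continue
		}
		fmt.Println("disabled", src)
	}
}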
	I0505 21:15:53.575902   29367 start.go:494] detecting cgroup driver to use...
	I0505 21:15:53.575948   29367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:15:53.595546   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:15:53.610574   29367 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:15:53.610629   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:15:53.625764   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:15:53.640786   29367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:15:53.762725   29367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:15:53.950332   29367 docker.go:233] disabling docker service ...
	I0505 21:15:53.950389   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:15:53.966703   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:15:53.981102   29367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:15:54.118651   29367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:15:54.236140   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:15:54.251750   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:15:54.273464   29367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:15:54.273533   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.285094   29367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:15:54.285185   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.297250   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.308936   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.323138   29367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:15:54.337480   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.350674   29367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:15:54.370496   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
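The sed invocations above rewrite CRI-O's drop-in config in place. Below is a minimal Go sketch of the same kind of edit (forcing cgroup_manager to "cgroupfs"); the file path comes from the log, root privileges are assumed, and this is illustrative rather than minikube's code:

// crio_conf.go: replace any cgroup_manager line in a CRI-O drop-in config,
// mirroring the sed command in the log above. Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupManager rewrites every "cgroup_manager = ..." line to the given value.
func setCgroupManager(path, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}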
	I0505 21:15:54.382773   29367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:15:54.394261   29367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 21:15:54.394327   29367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 21:15:54.410065   29367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:15:54.421371   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:15:54.533560   29367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:15:54.689822   29367 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:15:54.689886   29367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:15:54.696023   29367 start.go:562] Will wait 60s for crictl version
	I0505 21:15:54.696071   29367 ssh_runner.go:195] Run: which crictl
	I0505 21:15:54.700847   29367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:15:54.751750   29367 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
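Both 60-second waits above (for the crio socket path and for crictl) boil down to polling with a deadline. A minimal sketch of that pattern using only the Go standard library; the path and timeout mirror the log:

// wait_socket.go: poll for a runtime socket path until it exists or a deadline
// passes. Sketch of the pattern only, not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath returns nil once path exists, or an error after timeout.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}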
	I0505 21:15:54.751846   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:15:54.786179   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:15:54.823252   29367 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:15:54.824391   29367 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:15:54.827175   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:54.827512   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:15:54.827542   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:15:54.827740   29367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:15:54.832212   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
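The /etc/hosts update above is an idempotent upsert: drop any stale line for the name, append the current IP, and write the file back. A minimal Go sketch of the same idea, assuming the name and IP from the log and root access to /etc/hosts:

// hosts_entry.go: idempotently ensure "ip<TAB>name" is present in a hosts file.
// Sketch only; not minikube's code.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing line for name and appends "ip<TAB>name".
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}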
	I0505 21:15:54.847192   29367 kubeadm.go:877] updating cluster {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:15:54.847291   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:15:54.847335   29367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:15:54.882126   29367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0505 21:15:54.882179   29367 ssh_runner.go:195] Run: which lz4
	I0505 21:15:54.886447   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0505 21:15:54.886534   29367 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 21:15:54.891461   29367 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 21:15:54.891489   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0505 21:15:56.548982   29367 crio.go:462] duration metric: took 1.662478276s to copy over tarball
	I0505 21:15:56.549054   29367 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 21:15:59.170048   29367 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.620958409s)
	I0505 21:15:59.170082   29367 crio.go:469] duration metric: took 2.621068356s to extract the tarball
	I0505 21:15:59.170090   29367 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 21:15:59.212973   29367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:15:59.267250   29367 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:15:59.267269   29367 cache_images.go:84] Images are preloaded, skipping loading
	I0505 21:15:59.267276   29367 kubeadm.go:928] updating node { 192.168.39.178 8443 v1.30.0 crio true true} ...
	I0505 21:15:59.267364   29367 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:15:59.267439   29367 ssh_runner.go:195] Run: crio config
	I0505 21:15:59.315965   29367 cni.go:84] Creating CNI manager for ""
	I0505 21:15:59.315986   29367 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0505 21:15:59.315996   29367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:15:59.316020   29367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-322980 NodeName:ha-322980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:15:59.316171   29367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-322980"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 21:15:59.316207   29367 kube-vip.go:111] generating kube-vip config ...
	I0505 21:15:59.316259   29367 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:15:59.342014   29367 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:15:59.342129   29367 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
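Generated blocks like the kubeadm config and the kube-vip static pod manifest above are typically rendered from a template with per-node values filled in. The following is a hypothetical sketch using Go's text/template; the Params struct and template text are illustrative only and are not minikube's generator:

// kubeadm_tmpl.go: render a kubeadm InitConfiguration fragment from a template.
// Hypothetical sketch; field names and template are illustrative.
package main

import (
	"os"
	"text/template"
)

type Params struct {
	NodeName  string
	NodeIP    string
	BindPort  int
	CRISocket string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	p := Params{
		NodeName:  "ha-322980",
		NodeIP:    "192.168.39.178",
		BindPort:  8443,
		CRISocket: "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		os.Exit(1)
	}
}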
	I0505 21:15:59.342205   29367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:15:59.354767   29367 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:15:59.354825   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 21:15:59.367195   29367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0505 21:15:59.387633   29367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:15:59.407122   29367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0505 21:15:59.426762   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0505 21:15:59.446645   29367 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:15:59.451385   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:15:59.466763   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:15:59.592147   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:15:59.611747   29367 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.178
	I0505 21:15:59.611768   29367 certs.go:194] generating shared ca certs ...
	I0505 21:15:59.611781   29367 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.611944   29367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:15:59.611995   29367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:15:59.612009   29367 certs.go:256] generating profile certs ...
	I0505 21:15:59.612081   29367 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:15:59.612104   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt with IP's: []
	I0505 21:15:59.789220   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt ...
	I0505 21:15:59.789246   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt: {Name:mkb9b4c515630ef7d7577699d1dd0f62181a2e95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.789421   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key ...
	I0505 21:15:59.789434   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key: {Name:mk3d64e88d4cf5cb8950198d8016844ad9d51ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.789530   29367 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1
	I0505 21:15:59.789552   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.254]
	I0505 21:15:59.929903   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1 ...
	I0505 21:15:59.929930   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1: {Name:mk9f7624fdabd39cce044f7ff8479aed79f944ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.930123   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1 ...
	I0505 21:15:59.930139   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1: {Name:mk9061c1eb79654726a0dd80d3f445c84d886d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:15:59.930235   29367 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.77cbfdd1 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:15:59.930309   29367 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.77cbfdd1 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:15:59.930361   29367 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:15:59.930375   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt with IP's: []
	I0505 21:16:00.114106   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt ...
	I0505 21:16:00.114134   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt: {Name:mkbc3987c5d5fa173c87a9b09d862fa07695ac93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:00.114314   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key ...
	I0505 21:16:00.114329   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key: {Name:mk7cdbe77608aed5ce72b4baebcbf84870ae6fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:00.114426   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:16:00.114445   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:16:00.114456   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:16:00.114469   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:16:00.114481   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:16:00.114500   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:16:00.114516   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:16:00.114533   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:16:00.114600   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:16:00.114633   29367 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:16:00.114646   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:16:00.114680   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:16:00.114702   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:16:00.114722   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:16:00.114761   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:16:00.114805   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.114828   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.114842   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.115355   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:16:00.150022   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:16:00.181766   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:16:00.215392   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:16:00.246046   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0505 21:16:00.276357   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 21:16:00.303779   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:16:00.331749   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:16:00.357748   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:16:00.387589   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:16:00.414236   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:16:00.440055   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:16:00.458944   29367 ssh_runner.go:195] Run: openssl version
	I0505 21:16:00.465242   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:16:00.478123   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.482993   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.483037   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:16:00.489225   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:16:00.501929   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:16:00.515529   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.520459   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.520507   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:00.526773   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:16:00.539611   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:16:00.552758   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.557535   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.557579   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:16:00.563917   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
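The certificate steps above hash each CA with openssl and symlink it into /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A minimal sketch of that pattern, shelling out to the same openssl invocation; this is not minikube's code, the certificate path is an example, and root is needed to create the link:

// cert_link.go: hash a CA certificate and symlink it under /etc/ssl/certs.
// Sketch of the pattern seen in the log; run as root to create the link.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // link already exists
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}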
	I0505 21:16:00.577907   29367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:16:00.582480   29367 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 21:16:00.582522   29367 kubeadm.go:391] StartCluster: {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:16:00.582610   29367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:16:00.582676   29367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:16:00.624855   29367 cri.go:89] found id: ""
	I0505 21:16:00.624933   29367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0505 21:16:00.637047   29367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 21:16:00.650968   29367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 21:16:00.663499   29367 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 21:16:00.663519   29367 kubeadm.go:156] found existing configuration files:
	
	I0505 21:16:00.663565   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 21:16:00.675054   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 21:16:00.675110   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 21:16:00.686684   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 21:16:00.697979   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 21:16:00.698033   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 21:16:00.709267   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 21:16:00.720257   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 21:16:00.720302   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 21:16:00.731752   29367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 21:16:00.742646   29367 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 21:16:00.742695   29367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 21:16:00.753969   29367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 21:16:00.877747   29367 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0505 21:16:00.877979   29367 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 21:16:01.027519   29367 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 21:16:01.027629   29367 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 21:16:01.027768   29367 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0505 21:16:01.253201   29367 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 21:16:01.394240   29367 out.go:204]   - Generating certificates and keys ...
	I0505 21:16:01.394379   29367 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 21:16:01.394460   29367 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 21:16:01.403637   29367 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0505 21:16:01.616128   29367 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0505 21:16:01.992561   29367 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0505 21:16:02.239704   29367 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0505 21:16:02.368329   29367 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0505 21:16:02.368565   29367 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-322980 localhost] and IPs [192.168.39.178 127.0.0.1 ::1]
	I0505 21:16:02.563897   29367 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0505 21:16:02.564112   29367 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-322980 localhost] and IPs [192.168.39.178 127.0.0.1 ::1]
	I0505 21:16:02.730896   29367 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0505 21:16:02.936943   29367 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0505 21:16:03.179224   29367 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0505 21:16:03.179425   29367 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 21:16:03.340119   29367 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 21:16:03.426263   29367 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0505 21:16:03.564383   29367 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 21:16:03.694444   29367 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 21:16:03.954715   29367 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 21:16:03.955430   29367 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 21:16:03.957841   29367 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 21:16:03.959513   29367 out.go:204]   - Booting up control plane ...
	I0505 21:16:03.959631   29367 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 21:16:03.959742   29367 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 21:16:03.960883   29367 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 21:16:03.989820   29367 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 21:16:03.989937   29367 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 21:16:03.989992   29367 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 21:16:04.141772   29367 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0505 21:16:04.141912   29367 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0505 21:16:04.643333   29367 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.61592ms
	I0505 21:16:04.643425   29367 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0505 21:16:13.671466   29367 kubeadm.go:309] [api-check] The API server is healthy after 9.027059086s
	I0505 21:16:13.687747   29367 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 21:16:13.701785   29367 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 21:16:13.732952   29367 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 21:16:13.733222   29367 kubeadm.go:309] [mark-control-plane] Marking the node ha-322980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 21:16:13.754735   29367 kubeadm.go:309] [bootstrap-token] Using token: 2zgn2d.a9djy29f23rnuhm1
	I0505 21:16:13.756246   29367 out.go:204]   - Configuring RBAC rules ...
	I0505 21:16:13.756392   29367 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 21:16:13.765989   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 21:16:13.775726   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 21:16:13.782240   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 21:16:13.785796   29367 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 21:16:13.789688   29367 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 21:16:14.080336   29367 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 21:16:14.511103   29367 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 21:16:15.079442   29367 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 21:16:15.080502   29367 kubeadm.go:309] 
	I0505 21:16:15.080583   29367 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 21:16:15.080600   29367 kubeadm.go:309] 
	I0505 21:16:15.080671   29367 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 21:16:15.080678   29367 kubeadm.go:309] 
	I0505 21:16:15.080723   29367 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 21:16:15.080828   29367 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 21:16:15.080890   29367 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 21:16:15.080906   29367 kubeadm.go:309] 
	I0505 21:16:15.080950   29367 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 21:16:15.080956   29367 kubeadm.go:309] 
	I0505 21:16:15.080996   29367 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 21:16:15.081004   29367 kubeadm.go:309] 
	I0505 21:16:15.081047   29367 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 21:16:15.081153   29367 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 21:16:15.081264   29367 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 21:16:15.081278   29367 kubeadm.go:309] 
	I0505 21:16:15.081363   29367 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 21:16:15.081437   29367 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 21:16:15.081444   29367 kubeadm.go:309] 
	I0505 21:16:15.081569   29367 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2zgn2d.a9djy29f23rnuhm1 \
	I0505 21:16:15.081706   29367 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 \
	I0505 21:16:15.081757   29367 kubeadm.go:309] 	--control-plane 
	I0505 21:16:15.081764   29367 kubeadm.go:309] 
	I0505 21:16:15.081874   29367 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 21:16:15.081883   29367 kubeadm.go:309] 
	I0505 21:16:15.081965   29367 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2zgn2d.a9djy29f23rnuhm1 \
	I0505 21:16:15.082059   29367 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 
	I0505 21:16:15.082671   29367 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 21:16:15.082725   29367 cni.go:84] Creating CNI manager for ""
	I0505 21:16:15.082738   29367 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0505 21:16:15.084351   29367 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0505 21:16:15.085703   29367 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0505 21:16:15.092212   29367 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0505 21:16:15.092228   29367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0505 21:16:15.114432   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0505 21:16:15.477564   29367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 21:16:15.477659   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:15.477698   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-322980 minikube.k8s.io/updated_at=2024_05_05T21_16_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=ha-322980 minikube.k8s.io/primary=true
	I0505 21:16:15.749581   29367 ops.go:34] apiserver oom_adj: -16
	I0505 21:16:15.749706   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:16.249813   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:16.750664   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:17.249922   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:17.750161   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:18.249824   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:18.750723   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:19.250399   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:19.750617   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:20.250156   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:20.749934   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:21.249823   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:21.750563   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:22.250502   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:22.750279   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:23.250613   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:23.749792   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:24.250417   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:24.750496   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:25.249969   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 21:16:25.382904   29367 kubeadm.go:1107] duration metric: took 9.90531208s to wait for elevateKubeSystemPrivileges
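The repeated "kubectl get sa default" calls above form a poll-until-ready loop: the default ServiceAccount only appears once the control plane has finished bootstrapping. A minimal sketch of that loop with a deadline; the kubectl and kubeconfig paths are taken from the log and are assumptions about the host layout:

// wait_sa.go: poll "kubectl get sa default" until it succeeds or a deadline
// passes. Sketch only; not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("default service account is ready")
}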
	W0505 21:16:25.382956   29367 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 21:16:25.382966   29367 kubeadm.go:393] duration metric: took 24.800444819s to StartCluster
	I0505 21:16:25.382988   29367 settings.go:142] acquiring lock: {Name:mkbe19b7965e4b0b9928cd2b7b56f51dec95b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:25.383079   29367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:16:25.383788   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:25.384008   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0505 21:16:25.384035   29367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 21:16:25.384010   29367 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:16:25.384160   29367 start.go:240] waiting for startup goroutines ...
	I0505 21:16:25.384130   29367 addons.go:69] Setting default-storageclass=true in profile "ha-322980"
	I0505 21:16:25.384221   29367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-322980"
	I0505 21:16:25.384259   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:25.384130   29367 addons.go:69] Setting storage-provisioner=true in profile "ha-322980"
	I0505 21:16:25.384321   29367 addons.go:234] Setting addon storage-provisioner=true in "ha-322980"
	I0505 21:16:25.384352   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:16:25.384717   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.384768   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.384717   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.384836   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.406853   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I0505 21:16:25.406907   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0505 21:16:25.407353   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.407405   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.407888   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.407916   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.408040   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.408065   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.408291   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.408408   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.408547   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:25.408875   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.408927   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.410823   29367 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:16:25.411166   29367 kapi.go:59] client config for ha-322980: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 21:16:25.411799   29367 cert_rotation.go:137] Starting client certificate rotation controller
	I0505 21:16:25.412003   29367 addons.go:234] Setting addon default-storageclass=true in "ha-322980"
	I0505 21:16:25.412046   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:16:25.412446   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.412488   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.424430   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0505 21:16:25.424871   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.425369   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.425393   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.425746   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.425926   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:25.427410   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46201
	I0505 21:16:25.427665   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:16:25.427765   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.429429   29367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 21:16:25.428148   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.430670   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.430755   29367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 21:16:25.430776   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 21:16:25.430797   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:16:25.431020   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.431657   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:25.431699   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:25.433553   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.433852   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:16:25.433876   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.433954   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:16:25.434123   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:16:25.434253   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:16:25.434386   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:16:25.452948   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0505 21:16:25.453407   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:25.454001   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:25.454024   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:25.454507   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:25.454716   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:25.456452   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:16:25.456729   29367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 21:16:25.456743   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 21:16:25.456755   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:16:25.460063   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.460505   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:16:25.460524   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:25.460705   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:16:25.460870   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:16:25.461048   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:16:25.461184   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:16:25.583707   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0505 21:16:25.694966   29367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 21:16:25.785183   29367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 21:16:26.222314   29367 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
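
The sed pipeline run at 21:16:25.583707 patches the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway. A rough sketch of the stanza it injects into the Corefile (illustrative; the full ConfigMap contents are not shown in this log):

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

The same pipeline also inserts a `log` directive ahead of `errors`.
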
	I0505 21:16:26.624176   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624200   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624318   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624330   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624526   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.624546   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.624556   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624564   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624658   29367 main.go:141] libmachine: (ha-322980) DBG | Closing plugin on server side
	I0505 21:16:26.624710   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.624728   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.624754   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.624763   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.624823   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.624853   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.624966   29367 main.go:141] libmachine: (ha-322980) DBG | Closing plugin on server side
	I0505 21:16:26.625009   29367 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0505 21:16:26.625017   29367 round_trippers.go:469] Request Headers:
	I0505 21:16:26.625027   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:16:26.625033   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:16:26.625051   29367 main.go:141] libmachine: (ha-322980) DBG | Closing plugin on server side
	I0505 21:16:26.625133   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.625179   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.637795   29367 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0505 21:16:26.638368   29367 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0505 21:16:26.638385   29367 round_trippers.go:469] Request Headers:
	I0505 21:16:26.638393   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:16:26.638398   29367 round_trippers.go:473]     Content-Type: application/json
	I0505 21:16:26.638401   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:16:26.641594   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:16:26.642094   29367 main.go:141] libmachine: Making call to close driver server
	I0505 21:16:26.642108   29367 main.go:141] libmachine: (ha-322980) Calling .Close
	I0505 21:16:26.642446   29367 main.go:141] libmachine: Successfully made call to close driver server
	I0505 21:16:26.642466   29367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 21:16:26.644455   29367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0505 21:16:26.645767   29367 addons.go:510] duration metric: took 1.26173268s for enable addons: enabled=[storage-provisioner default-storageclass]
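
Both addons are enabled by copying the manifests to /etc/kubernetes/addons (21:16:25.430755 and 21:16:25.456729) and applying them with the kubelet-bundled kubectl inside the VM. A quick way to confirm the result from the host is sketched below; the context name and the storage-provisioner pod name are the usual minikube defaults and are assumed here, not taken from this log:

    kubectl --context ha-322980 get storageclass
    kubectl --context ha-322980 -n kube-system get pod storage-provisioner
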
	I0505 21:16:26.645813   29367 start.go:245] waiting for cluster config update ...
	I0505 21:16:26.645829   29367 start.go:254] writing updated cluster config ...
	I0505 21:16:26.647406   29367 out.go:177] 
	I0505 21:16:26.648783   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:26.648891   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:16:26.650599   29367 out.go:177] * Starting "ha-322980-m02" control-plane node in "ha-322980" cluster
	I0505 21:16:26.652020   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:16:26.652049   29367 cache.go:56] Caching tarball of preloaded images
	I0505 21:16:26.652154   29367 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:16:26.652170   29367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:16:26.652280   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:16:26.652499   29367 start.go:360] acquireMachinesLock for ha-322980-m02: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:16:26.652560   29367 start.go:364] duration metric: took 33.568µs to acquireMachinesLock for "ha-322980-m02"
	I0505 21:16:26.652585   29367 start.go:93] Provisioning new machine with config: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:16:26.652691   29367 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0505 21:16:26.654570   29367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 21:16:26.654684   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:26.654729   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:26.669319   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0505 21:16:26.669732   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:26.670192   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:26.670222   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:26.670564   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:26.670808   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:26.670987   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:26.671171   29367 start.go:159] libmachine.API.Create for "ha-322980" (driver="kvm2")
	I0505 21:16:26.671204   29367 client.go:168] LocalClient.Create starting
	I0505 21:16:26.671243   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 21:16:26.671287   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:16:26.671309   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:16:26.671374   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 21:16:26.671401   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:16:26.671418   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:16:26.671442   29367 main.go:141] libmachine: Running pre-create checks...
	I0505 21:16:26.671454   29367 main.go:141] libmachine: (ha-322980-m02) Calling .PreCreateCheck
	I0505 21:16:26.671672   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetConfigRaw
	I0505 21:16:26.672146   29367 main.go:141] libmachine: Creating machine...
	I0505 21:16:26.672164   29367 main.go:141] libmachine: (ha-322980-m02) Calling .Create
	I0505 21:16:26.672317   29367 main.go:141] libmachine: (ha-322980-m02) Creating KVM machine...
	I0505 21:16:26.673647   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found existing default KVM network
	I0505 21:16:26.673752   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found existing private KVM network mk-ha-322980
	I0505 21:16:26.673890   29367 main.go:141] libmachine: (ha-322980-m02) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02 ...
	I0505 21:16:26.673913   29367 main.go:141] libmachine: (ha-322980-m02) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 21:16:26.673985   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:26.673869   29784 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:16:26.674089   29367 main.go:141] libmachine: (ha-322980-m02) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 21:16:26.889974   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:26.889821   29784 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa...
	I0505 21:16:27.045565   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:27.045423   29784 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/ha-322980-m02.rawdisk...
	I0505 21:16:27.045619   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Writing magic tar header
	I0505 21:16:27.045630   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Writing SSH key tar header
	I0505 21:16:27.045643   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:27.045539   29784 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02 ...
	I0505 21:16:27.045665   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02
	I0505 21:16:27.045685   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02 (perms=drwx------)
	I0505 21:16:27.045699   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 21:16:27.045726   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:16:27.045735   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 21:16:27.045749   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 21:16:27.045761   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home/jenkins
	I0505 21:16:27.045792   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 21:16:27.045813   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Checking permissions on dir: /home
	I0505 21:16:27.045820   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 21:16:27.045833   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 21:16:27.045847   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 21:16:27.045859   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Skipping /home - not owner
	I0505 21:16:27.045872   29367 main.go:141] libmachine: (ha-322980-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
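
At this point the per-machine store directory has been populated from the cached ISO and the freshly generated key and raw disk. Roughly, it should contain at least the files created above (a sketch; exact contents may vary by minikube version):

    ls /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/
    # boot2docker.iso  ha-322980-m02.rawdisk  id_rsa  ...
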
	I0505 21:16:27.045881   29367 main.go:141] libmachine: (ha-322980-m02) Creating domain...
	I0505 21:16:27.046595   29367 main.go:141] libmachine: (ha-322980-m02) define libvirt domain using xml: 
	I0505 21:16:27.046614   29367 main.go:141] libmachine: (ha-322980-m02) <domain type='kvm'>
	I0505 21:16:27.046623   29367 main.go:141] libmachine: (ha-322980-m02)   <name>ha-322980-m02</name>
	I0505 21:16:27.046634   29367 main.go:141] libmachine: (ha-322980-m02)   <memory unit='MiB'>2200</memory>
	I0505 21:16:27.046646   29367 main.go:141] libmachine: (ha-322980-m02)   <vcpu>2</vcpu>
	I0505 21:16:27.046651   29367 main.go:141] libmachine: (ha-322980-m02)   <features>
	I0505 21:16:27.046659   29367 main.go:141] libmachine: (ha-322980-m02)     <acpi/>
	I0505 21:16:27.046664   29367 main.go:141] libmachine: (ha-322980-m02)     <apic/>
	I0505 21:16:27.046669   29367 main.go:141] libmachine: (ha-322980-m02)     <pae/>
	I0505 21:16:27.046673   29367 main.go:141] libmachine: (ha-322980-m02)     
	I0505 21:16:27.046678   29367 main.go:141] libmachine: (ha-322980-m02)   </features>
	I0505 21:16:27.046686   29367 main.go:141] libmachine: (ha-322980-m02)   <cpu mode='host-passthrough'>
	I0505 21:16:27.046692   29367 main.go:141] libmachine: (ha-322980-m02)   
	I0505 21:16:27.046702   29367 main.go:141] libmachine: (ha-322980-m02)   </cpu>
	I0505 21:16:27.046722   29367 main.go:141] libmachine: (ha-322980-m02)   <os>
	I0505 21:16:27.046740   29367 main.go:141] libmachine: (ha-322980-m02)     <type>hvm</type>
	I0505 21:16:27.046752   29367 main.go:141] libmachine: (ha-322980-m02)     <boot dev='cdrom'/>
	I0505 21:16:27.046765   29367 main.go:141] libmachine: (ha-322980-m02)     <boot dev='hd'/>
	I0505 21:16:27.046775   29367 main.go:141] libmachine: (ha-322980-m02)     <bootmenu enable='no'/>
	I0505 21:16:27.046781   29367 main.go:141] libmachine: (ha-322980-m02)   </os>
	I0505 21:16:27.046786   29367 main.go:141] libmachine: (ha-322980-m02)   <devices>
	I0505 21:16:27.046795   29367 main.go:141] libmachine: (ha-322980-m02)     <disk type='file' device='cdrom'>
	I0505 21:16:27.046805   29367 main.go:141] libmachine: (ha-322980-m02)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/boot2docker.iso'/>
	I0505 21:16:27.046813   29367 main.go:141] libmachine: (ha-322980-m02)       <target dev='hdc' bus='scsi'/>
	I0505 21:16:27.046820   29367 main.go:141] libmachine: (ha-322980-m02)       <readonly/>
	I0505 21:16:27.046826   29367 main.go:141] libmachine: (ha-322980-m02)     </disk>
	I0505 21:16:27.046833   29367 main.go:141] libmachine: (ha-322980-m02)     <disk type='file' device='disk'>
	I0505 21:16:27.046843   29367 main.go:141] libmachine: (ha-322980-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 21:16:27.046871   29367 main.go:141] libmachine: (ha-322980-m02)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/ha-322980-m02.rawdisk'/>
	I0505 21:16:27.046892   29367 main.go:141] libmachine: (ha-322980-m02)       <target dev='hda' bus='virtio'/>
	I0505 21:16:27.046900   29367 main.go:141] libmachine: (ha-322980-m02)     </disk>
	I0505 21:16:27.046904   29367 main.go:141] libmachine: (ha-322980-m02)     <interface type='network'>
	I0505 21:16:27.046910   29367 main.go:141] libmachine: (ha-322980-m02)       <source network='mk-ha-322980'/>
	I0505 21:16:27.046929   29367 main.go:141] libmachine: (ha-322980-m02)       <model type='virtio'/>
	I0505 21:16:27.046938   29367 main.go:141] libmachine: (ha-322980-m02)     </interface>
	I0505 21:16:27.046943   29367 main.go:141] libmachine: (ha-322980-m02)     <interface type='network'>
	I0505 21:16:27.046949   29367 main.go:141] libmachine: (ha-322980-m02)       <source network='default'/>
	I0505 21:16:27.046956   29367 main.go:141] libmachine: (ha-322980-m02)       <model type='virtio'/>
	I0505 21:16:27.046962   29367 main.go:141] libmachine: (ha-322980-m02)     </interface>
	I0505 21:16:27.046967   29367 main.go:141] libmachine: (ha-322980-m02)     <serial type='pty'>
	I0505 21:16:27.046973   29367 main.go:141] libmachine: (ha-322980-m02)       <target port='0'/>
	I0505 21:16:27.046980   29367 main.go:141] libmachine: (ha-322980-m02)     </serial>
	I0505 21:16:27.046986   29367 main.go:141] libmachine: (ha-322980-m02)     <console type='pty'>
	I0505 21:16:27.046995   29367 main.go:141] libmachine: (ha-322980-m02)       <target type='serial' port='0'/>
	I0505 21:16:27.047023   29367 main.go:141] libmachine: (ha-322980-m02)     </console>
	I0505 21:16:27.047046   29367 main.go:141] libmachine: (ha-322980-m02)     <rng model='virtio'>
	I0505 21:16:27.047061   29367 main.go:141] libmachine: (ha-322980-m02)       <backend model='random'>/dev/random</backend>
	I0505 21:16:27.047070   29367 main.go:141] libmachine: (ha-322980-m02)     </rng>
	I0505 21:16:27.047078   29367 main.go:141] libmachine: (ha-322980-m02)     
	I0505 21:16:27.047088   29367 main.go:141] libmachine: (ha-322980-m02)     
	I0505 21:16:27.047100   29367 main.go:141] libmachine: (ha-322980-m02)   </devices>
	I0505 21:16:27.047114   29367 main.go:141] libmachine: (ha-322980-m02) </domain>
	I0505 21:16:27.047137   29367 main.go:141] libmachine: (ha-322980-m02) 
	I0505 21:16:27.053474   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:a2:9a:5e in network default
	I0505 21:16:27.054066   29367 main.go:141] libmachine: (ha-322980-m02) Ensuring networks are active...
	I0505 21:16:27.054089   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:27.054781   29367 main.go:141] libmachine: (ha-322980-m02) Ensuring network default is active
	I0505 21:16:27.055053   29367 main.go:141] libmachine: (ha-322980-m02) Ensuring network mk-ha-322980 is active
	I0505 21:16:27.055373   29367 main.go:141] libmachine: (ha-322980-m02) Getting domain xml...
	I0505 21:16:27.056030   29367 main.go:141] libmachine: (ha-322980-m02) Creating domain...
	I0505 21:16:28.264297   29367 main.go:141] libmachine: (ha-322980-m02) Waiting to get IP...
	I0505 21:16:28.265277   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:28.265768   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:28.265812   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:28.265757   29784 retry.go:31] will retry after 218.278648ms: waiting for machine to come up
	I0505 21:16:28.485333   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:28.485945   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:28.485972   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:28.485893   29784 retry.go:31] will retry after 357.838703ms: waiting for machine to come up
	I0505 21:16:28.845674   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:28.846151   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:28.846181   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:28.846100   29784 retry.go:31] will retry after 443.483557ms: waiting for machine to come up
	I0505 21:16:29.293044   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:29.293529   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:29.293553   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:29.293488   29784 retry.go:31] will retry after 526.787702ms: waiting for machine to come up
	I0505 21:16:29.822198   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:29.822556   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:29.822595   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:29.822513   29784 retry.go:31] will retry after 458.871695ms: waiting for machine to come up
	I0505 21:16:30.283446   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:30.283853   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:30.283873   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:30.283823   29784 retry.go:31] will retry after 611.219423ms: waiting for machine to come up
	I0505 21:16:30.896969   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:30.897428   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:30.897458   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:30.897368   29784 retry.go:31] will retry after 1.100483339s: waiting for machine to come up
	I0505 21:16:31.999907   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:32.000354   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:32.000391   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:32.000332   29784 retry.go:31] will retry after 1.25923991s: waiting for machine to come up
	I0505 21:16:33.261662   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:33.262111   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:33.262139   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:33.262046   29784 retry.go:31] will retry after 1.398082567s: waiting for machine to come up
	I0505 21:16:34.662648   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:34.663130   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:34.663157   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:34.663082   29784 retry.go:31] will retry after 2.195675763s: waiting for machine to come up
	I0505 21:16:36.860415   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:36.860874   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:36.860904   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:36.860816   29784 retry.go:31] will retry after 2.407725991s: waiting for machine to come up
	I0505 21:16:39.269961   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:39.270455   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:39.270488   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:39.270370   29784 retry.go:31] will retry after 2.806944631s: waiting for machine to come up
	I0505 21:16:42.079610   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:42.079993   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:42.080019   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:42.079955   29784 retry.go:31] will retry after 3.727124624s: waiting for machine to come up
	I0505 21:16:45.812094   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:45.812553   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find current IP address of domain ha-322980-m02 in network mk-ha-322980
	I0505 21:16:45.812580   29367 main.go:141] libmachine: (ha-322980-m02) DBG | I0505 21:16:45.812502   29784 retry.go:31] will retry after 5.548395809s: waiting for machine to come up
	I0505 21:16:51.364646   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.365085   29367 main.go:141] libmachine: (ha-322980-m02) Found IP for machine: 192.168.39.228
	I0505 21:16:51.365105   29367 main.go:141] libmachine: (ha-322980-m02) Reserving static IP address...
	I0505 21:16:51.365115   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has current primary IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.365563   29367 main.go:141] libmachine: (ha-322980-m02) DBG | unable to find host DHCP lease matching {name: "ha-322980-m02", mac: "52:54:00:91:59:b4", ip: "192.168.39.228"} in network mk-ha-322980
	I0505 21:16:51.435239   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Getting to WaitForSSH function...
	I0505 21:16:51.435274   29367 main.go:141] libmachine: (ha-322980-m02) Reserved static IP address: 192.168.39.228
	I0505 21:16:51.435287   29367 main.go:141] libmachine: (ha-322980-m02) Waiting for SSH to be available...
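
The "waiting for machine to come up" retries above poll libvirt's DHCP leases for network mk-ha-322980 with increasing backoff until the VM's MAC (52:54:00:91:59:b4) acquires an address. The same lease table can be inspected by hand (sketch):

    virsh --connect qemu:///system net-dhcp-leases mk-ha-322980
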
	I0505 21:16:51.437836   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.438330   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.438351   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.438466   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Using SSH client type: external
	I0505 21:16:51.438491   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa (-rw-------)
	I0505 21:16:51.438564   29367 main.go:141] libmachine: (ha-322980-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:16:51.438598   29367 main.go:141] libmachine: (ha-322980-m02) DBG | About to run SSH command:
	I0505 21:16:51.438618   29367 main.go:141] libmachine: (ha-322980-m02) DBG | exit 0
	I0505 21:16:51.567511   29367 main.go:141] libmachine: (ha-322980-m02) DBG | SSH cmd err, output: <nil>: 
	I0505 21:16:51.567784   29367 main.go:141] libmachine: (ha-322980-m02) KVM machine creation complete!
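
The readiness probe logged at 21:16:51.438564 simply runs `exit 0` over SSH with host-key checking disabled. A trimmed-down equivalent of that probe, using the key path and address from this log (sketch):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa \
        docker@192.168.39.228 'exit 0' && echo "ssh is up"
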
	I0505 21:16:51.568084   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetConfigRaw
	I0505 21:16:51.568642   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:51.568841   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:51.569057   29367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 21:16:51.569078   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:16:51.570245   29367 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 21:16:51.570261   29367 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 21:16:51.570268   29367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 21:16:51.570276   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.572647   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.573050   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.573078   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.573239   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.573429   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.573554   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.573703   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.573897   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.574127   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.574144   29367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 21:16:51.683516   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:16:51.683541   29367 main.go:141] libmachine: Detecting the provisioner...
	I0505 21:16:51.683551   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.686290   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.686643   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.686683   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.686821   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.687014   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.687163   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.687301   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.687439   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.687619   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.687631   29367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 21:16:51.796608   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 21:16:51.796686   29367 main.go:141] libmachine: found compatible host: buildroot
	I0505 21:16:51.796701   29367 main.go:141] libmachine: Provisioning with buildroot...
	I0505 21:16:51.796712   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:51.796966   29367 buildroot.go:166] provisioning hostname "ha-322980-m02"
	I0505 21:16:51.796991   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:51.797188   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.799655   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.800009   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.800052   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.800195   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.800373   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.800545   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.800687   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.800857   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.801031   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.801045   29367 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980-m02 && echo "ha-322980-m02" | sudo tee /etc/hostname
	I0505 21:16:51.925690   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980-m02
	
	I0505 21:16:51.925718   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:51.928452   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.928818   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:51.928847   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:51.929034   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:51.929240   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.929418   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:51.929596   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:51.929764   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:51.929957   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:51.929981   29367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:16:52.050564   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
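
The hostname step writes /etc/hostname via `sudo tee` and then patches /etc/hosts so the new name resolves locally. After the script above runs, the guest's /etc/hosts is expected to contain a line like the following (illustrative):

    127.0.1.1 ha-322980-m02
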
	I0505 21:16:52.050592   29367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:16:52.050623   29367 buildroot.go:174] setting up certificates
	I0505 21:16:52.050635   29367 provision.go:84] configureAuth start
	I0505 21:16:52.050664   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetMachineName
	I0505 21:16:52.050929   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:52.053658   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.053995   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.054022   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.054179   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.056345   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.056742   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.056785   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.056933   29367 provision.go:143] copyHostCerts
	I0505 21:16:52.056963   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:16:52.057002   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:16:52.057015   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:16:52.057124   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:16:52.057244   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:16:52.057279   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:16:52.057291   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:16:52.057333   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:16:52.057423   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:16:52.057452   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:16:52.057460   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:16:52.057495   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:16:52.057591   29367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980-m02 san=[127.0.0.1 192.168.39.228 ha-322980-m02 localhost minikube]
	I0505 21:16:52.379058   29367 provision.go:177] copyRemoteCerts
	I0505 21:16:52.379126   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:16:52.379157   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.381743   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.382033   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.382055   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.382240   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.382430   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.382567   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.382695   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:52.467046   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:16:52.467173   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:16:52.495979   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:16:52.496050   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 21:16:52.521847   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:16:52.521908   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:16:52.548671   29367 provision.go:87] duration metric: took 498.021001ms to configureAuth
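
configureAuth copies the host CA/client certs into the machine store, generates a per-machine server cert with the SANs listed at 21:16:52.057591, and ships ca.pem, server.pem and server-key.pem to /etc/docker on the guest. One way to double-check the SANs on the generated server cert (sketch, using the ServerCertPath from the auth options above):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'
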
	I0505 21:16:52.548705   29367 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:16:52.548932   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:52.549017   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.551653   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.552024   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.552052   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.552252   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.552447   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.552591   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.552711   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.552940   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:52.553095   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:52.553115   29367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:16:52.834425   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:16:52.834461   29367 main.go:141] libmachine: Checking connection to Docker...
	I0505 21:16:52.834473   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetURL
	I0505 21:16:52.835752   29367 main.go:141] libmachine: (ha-322980-m02) DBG | Using libvirt version 6000000
	I0505 21:16:52.838267   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.838630   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.838661   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.838814   29367 main.go:141] libmachine: Docker is up and running!
	I0505 21:16:52.838831   29367 main.go:141] libmachine: Reticulating splines...
	I0505 21:16:52.838838   29367 client.go:171] duration metric: took 26.167624154s to LocalClient.Create
	I0505 21:16:52.838862   29367 start.go:167] duration metric: took 26.167693485s to libmachine.API.Create "ha-322980"
	I0505 21:16:52.838878   29367 start.go:293] postStartSetup for "ha-322980-m02" (driver="kvm2")
	I0505 21:16:52.838891   29367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:16:52.838922   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:52.839161   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:16:52.839190   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.841234   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.841492   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.841524   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.841633   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.841818   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.842002   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.842139   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:52.929492   29367 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:16:52.934730   29367 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:16:52.934753   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:16:52.934827   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:16:52.934909   29367 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:16:52.934921   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:16:52.935015   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:16:52.947700   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:16:52.975618   29367 start.go:296] duration metric: took 136.725548ms for postStartSetup
	I0505 21:16:52.975750   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetConfigRaw
	I0505 21:16:52.976327   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:52.979170   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.979558   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.979588   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.979776   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:16:52.979943   29367 start.go:128] duration metric: took 26.327239423s to createHost
	I0505 21:16:52.979963   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:52.982126   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.982548   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:52.982584   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:52.982731   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:52.982921   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.983066   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:52.983211   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:52.983418   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:16:52.983623   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0505 21:16:52.983637   29367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:16:53.092793   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714943813.078239933
	
	I0505 21:16:53.092814   29367 fix.go:216] guest clock: 1714943813.078239933
	I0505 21:16:53.092825   29367 fix.go:229] Guest: 2024-05-05 21:16:53.078239933 +0000 UTC Remote: 2024-05-05 21:16:52.979953804 +0000 UTC m=+84.842713381 (delta=98.286129ms)
	I0505 21:16:53.092843   29367 fix.go:200] guest clock delta is within tolerance: 98.286129ms
	I0505 21:16:53.092849   29367 start.go:83] releasing machines lock for "ha-322980-m02", held for 26.44027621s
	I0505 21:16:53.092873   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.093108   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:53.095332   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.095797   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:53.095828   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.098033   29367 out.go:177] * Found network options:
	I0505 21:16:53.099371   29367 out.go:177]   - NO_PROXY=192.168.39.178
	W0505 21:16:53.100556   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 21:16:53.100592   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.101074   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.101287   29367 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:16:53.101369   29367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:16:53.101410   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	W0505 21:16:53.101489   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 21:16:53.101560   29367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:16:53.101582   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:16:53.103970   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104306   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:53.104334   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104444   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104513   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:53.104753   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:53.104898   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:53.104917   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:53.104932   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:53.105077   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:53.105142   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:16:53.105295   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:16:53.105530   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:16:53.105702   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:16:53.350389   29367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:16:53.357679   29367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:16:53.357743   29367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:16:53.374942   29367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 21:16:53.374965   29367 start.go:494] detecting cgroup driver to use...
	I0505 21:16:53.375033   29367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:16:53.392470   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:16:53.406913   29367 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:16:53.406967   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:16:53.420841   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:16:53.434674   29367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:16:53.556020   29367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:16:53.698587   29367 docker.go:233] disabling docker service ...
	I0505 21:16:53.698651   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:16:53.716510   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:16:53.731576   29367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:16:53.877152   29367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:16:53.991713   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:16:54.007884   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:16:54.029276   29367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:16:54.029330   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.041610   29367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:16:54.041671   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.053411   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.064311   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.075235   29367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:16:54.086120   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.098550   29367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.117350   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:16:54.128050   29367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:16:54.137866   29367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 21:16:54.137913   29367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 21:16:54.152227   29367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:16:54.162712   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:16:54.280446   29367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:16:54.435248   29367 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:16:54.435317   29367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:16:54.442224   29367 start.go:562] Will wait 60s for crictl version
	I0505 21:16:54.442286   29367 ssh_runner.go:195] Run: which crictl
	I0505 21:16:54.446568   29367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:16:54.486604   29367 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:16:54.486669   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:16:54.521653   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:16:54.557850   29367 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:16:54.559337   29367 out.go:177]   - env NO_PROXY=192.168.39.178
	I0505 21:16:54.560303   29367 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:16:54.562636   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:54.562931   29367 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:16:42 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:16:54.562958   29367 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:16:54.563214   29367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:16:54.567662   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:16:54.581456   29367 mustload.go:65] Loading cluster: ha-322980
	I0505 21:16:54.581648   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:16:54.582020   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:54.582062   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:54.596154   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0505 21:16:54.596542   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:54.596986   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:54.597013   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:54.597342   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:54.597559   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:16:54.598966   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:16:54.599233   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:16:54.599256   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:16:54.613190   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0505 21:16:54.613605   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:16:54.614051   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:16:54.614072   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:16:54.614317   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:16:54.614500   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:16:54.614659   29367 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.228
	I0505 21:16:54.614672   29367 certs.go:194] generating shared ca certs ...
	I0505 21:16:54.614684   29367 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:54.614823   29367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:16:54.614870   29367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:16:54.614880   29367 certs.go:256] generating profile certs ...
	I0505 21:16:54.614948   29367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:16:54.614972   29367 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b
	I0505 21:16:54.614986   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.254]
	I0505 21:16:54.759126   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b ...
	I0505 21:16:54.759153   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b: {Name:mkcf6f675dbe6e4e6e920993380cde57d475599a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:54.759333   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b ...
	I0505 21:16:54.759349   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b: {Name:mk0f3cf878fab5fa33854f97974df366519b30cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:16:54.759450   29367 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.e646210b -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:16:54.759608   29367 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.e646210b -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:16:54.759729   29367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:16:54.759746   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:16:54.759758   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:16:54.759770   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:16:54.759783   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:16:54.759795   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:16:54.759807   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:16:54.759818   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:16:54.759830   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:16:54.759871   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:16:54.759899   29367 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:16:54.759908   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:16:54.759927   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:16:54.759950   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:16:54.759970   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:16:54.760006   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:16:54.760033   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:54.760046   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:16:54.760059   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:16:54.760088   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:16:54.763448   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:54.763917   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:16:54.763950   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:16:54.764100   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:16:54.764285   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:16:54.764463   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:16:54.764612   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:16:54.839997   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0505 21:16:54.845950   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 21:16:54.858185   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0505 21:16:54.862906   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0505 21:16:54.873652   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 21:16:54.878184   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 21:16:54.888814   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0505 21:16:54.893566   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 21:16:54.904232   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0505 21:16:54.908518   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 21:16:54.918924   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0505 21:16:54.923157   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 21:16:54.935092   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:16:54.964109   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:16:54.990070   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:16:55.017912   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:16:55.044937   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0505 21:16:55.070973   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:16:55.097019   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:16:55.123633   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:16:55.149575   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:16:55.175506   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:16:55.205977   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:16:55.235625   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 21:16:55.254123   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0505 21:16:55.272371   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 21:16:55.290906   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 21:16:55.309078   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 21:16:55.327422   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 21:16:55.344772   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 21:16:55.363547   29367 ssh_runner.go:195] Run: openssl version
	I0505 21:16:55.369907   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:16:55.381792   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:16:55.387285   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:16:55.387341   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:16:55.394098   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:16:55.406273   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:16:55.419321   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:55.424714   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:55.424778   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:16:55.431513   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:16:55.443798   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:16:55.455987   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:16:55.461266   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:16:55.461325   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:16:55.467703   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:16:55.479880   29367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:16:55.484693   29367 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 21:16:55.484760   29367 kubeadm.go:928] updating node {m02 192.168.39.228 8443 v1.30.0 crio true true} ...
	I0505 21:16:55.484839   29367 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:16:55.484868   29367 kube-vip.go:111] generating kube-vip config ...
	I0505 21:16:55.484902   29367 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:16:55.502712   29367 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:16:55.502793   29367 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 21:16:55.502840   29367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:16:55.515224   29367 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0505 21:16:55.515301   29367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0505 21:16:55.526926   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0505 21:16:55.526958   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:16:55.527026   29367 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0505 21:16:55.527061   29367 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0505 21:16:55.527039   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:16:55.533058   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0505 21:16:55.533090   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0505 21:17:17.719707   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:17:17.719778   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:17:17.726396   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0505 21:17:17.726428   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0505 21:17:50.020951   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:17:50.038891   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:17:50.038981   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:17:50.044160   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0505 21:17:50.044190   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0505 21:17:50.504506   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 21:17:50.515043   29367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0505 21:17:50.534875   29367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:17:50.556682   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:17:50.577948   29367 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:17:50.582568   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:17:50.596820   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:17:50.755906   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:17:50.779139   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:17:50.779615   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:17:50.779659   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:17:50.795054   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0505 21:17:50.795670   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:17:50.796336   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:17:50.796369   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:17:50.796697   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:17:50.796913   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:17:50.797113   29367 start.go:316] joinCluster: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:17:50.797222   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0505 21:17:50.797244   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:17:50.800376   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:17:50.800800   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:17:50.800824   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:17:50.801026   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:17:50.801179   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:17:50.801321   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:17:50.801444   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:17:50.981303   29367 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:17:50.981352   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4gn4z0.x12krlpmiirjw5ha --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m02 --control-plane --apiserver-advertise-address=192.168.39.228 --apiserver-bind-port=8443"
	I0505 21:18:15.158467   29367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4gn4z0.x12krlpmiirjw5ha --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m02 --control-plane --apiserver-advertise-address=192.168.39.228 --apiserver-bind-port=8443": (24.177086804s)
	I0505 21:18:15.158504   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0505 21:18:15.748681   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-322980-m02 minikube.k8s.io/updated_at=2024_05_05T21_18_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=ha-322980 minikube.k8s.io/primary=false
	I0505 21:18:15.913052   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-322980-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0505 21:18:16.054538   29367 start.go:318] duration metric: took 25.257420448s to joinCluster
	I0505 21:18:16.054611   29367 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:18:16.056263   29367 out.go:177] * Verifying Kubernetes components...
	I0505 21:18:16.054924   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:18:16.057883   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:18:16.308454   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:18:16.337902   29367 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:18:16.338272   29367 kapi.go:59] client config for ha-322980: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 21:18:16.338366   29367 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.178:8443
	I0505 21:18:16.338649   29367 node_ready.go:35] waiting up to 6m0s for node "ha-322980-m02" to be "Ready" ...
	I0505 21:18:16.338754   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:16.338767   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:16.338778   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:16.338788   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:16.349870   29367 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0505 21:18:16.839766   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:16.839788   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:16.839798   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:16.839805   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:16.852850   29367 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0505 21:18:17.338946   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:17.338970   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:17.338977   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:17.338980   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:17.343659   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:17.838912   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:17.838939   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:17.838947   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:17.838953   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:17.845453   29367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 21:18:18.339337   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:18.339359   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:18.339366   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:18.339369   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:18.342594   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:18.343349   29367 node_ready.go:53] node "ha-322980-m02" has status "Ready":"False"
	I0505 21:18:18.839700   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:18.839725   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:18.839735   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:18.839741   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:18.842741   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:19.339816   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:19.339842   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:19.339852   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:19.339857   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:19.343050   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:19.839284   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:19.839309   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:19.839321   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:19.839328   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:19.842392   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:20.339523   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:20.339546   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:20.339556   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:20.339561   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:20.415502   29367 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0505 21:18:20.416443   29367 node_ready.go:53] node "ha-322980-m02" has status "Ready":"False"
	I0505 21:18:20.839860   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:20.839882   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:20.839892   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:20.839897   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:20.843751   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:21.338821   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:21.338848   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:21.338857   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:21.338861   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:21.342545   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:21.839191   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:21.839209   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:21.839214   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:21.839217   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:21.842470   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:22.339079   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:22.339106   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:22.339114   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:22.339119   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:22.343631   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:22.839591   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:22.839612   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:22.839618   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:22.839622   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:22.843834   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:22.845066   29367 node_ready.go:53] node "ha-322980-m02" has status "Ready":"False"
	I0505 21:18:23.339201   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:23.339227   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:23.339238   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:23.339246   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:23.346734   29367 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 21:18:23.839809   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:23.839836   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:23.839847   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:23.839851   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:23.843494   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.339660   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:24.339680   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.339686   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.339691   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.343340   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.344220   29367 node_ready.go:49] node "ha-322980-m02" has status "Ready":"True"
	I0505 21:18:24.344241   29367 node_ready.go:38] duration metric: took 8.0055689s for node "ha-322980-m02" to be "Ready" ...
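
The repeated GETs of /api/v1/nodes/ha-322980-m02 above are a readiness poll: the node object is fetched on a ~500ms cadence until its Ready condition reports True (which happens at 21:18:24.344220, after roughly 8 seconds). A minimal client-go sketch of that kind of check follows; it is illustrative only, not minikube's node_ready.go code, and the kubeconfig path is an assumed placeholder.

    // nodeready_sketch.go - poll a node until its Ready condition is True (illustrative sketch).
    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the node's Ready condition is True.
    func nodeIsReady(n *v1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == v1.NodeReady {
                return c.Status == v1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: a kubeconfig that points at the cluster under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-322980-m02", metav1.GetOptions{})
            if err == nil && nodeIsReady(n) {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the polling cadence visible in the log
        }
    }
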
	I0505 21:18:24.344251   29367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 21:18:24.344308   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:24.344319   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.344326   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.344329   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.349121   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:24.355038   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.355104   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-78zmw
	I0505 21:18:24.355110   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.355117   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.355123   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.358283   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.359260   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:24.359272   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.359278   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.359281   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.362177   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.362893   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.362908   29367 pod_ready.go:81] duration metric: took 7.847121ms for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.362919   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.362972   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqt45
	I0505 21:18:24.362982   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.362989   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.362994   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.365593   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.366298   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:24.366313   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.366323   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.366329   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.368668   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.369149   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.369164   29367 pod_ready.go:81] duration metric: took 6.237663ms for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.369172   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.369224   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980
	I0505 21:18:24.369235   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.369242   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.369247   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.371543   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.372131   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:24.372149   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.372157   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.372162   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.375096   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.375588   29367 pod_ready.go:92] pod "etcd-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.375609   29367 pod_ready.go:81] duration metric: took 6.427885ms for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.375620   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.375672   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m02
	I0505 21:18:24.375685   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.375695   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.375702   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.378107   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.378807   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:24.378821   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.378829   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.378834   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.381464   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.876213   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m02
	I0505 21:18:24.876235   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.876242   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.876247   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.879744   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:24.880247   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:24.880261   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.880268   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.880272   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.883094   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:24.883798   29367 pod_ready.go:92] pod "etcd-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:24.883816   29367 pod_ready.go:81] duration metric: took 508.185465ms for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.883830   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:24.940083   29367 request.go:629] Waited for 56.203588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:18:24.940159   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:18:24.940167   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:24.940184   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:24.940197   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:24.943603   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.140694   29367 request.go:629] Waited for 196.376779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.140751   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.140757   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.140764   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.140768   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.144370   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.145217   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:25.145238   29367 pod_ready.go:81] duration metric: took 261.40121ms for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
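
The "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's local rate limiter delaying requests once its QPS/Burst budget is spent; they are not server-side API Priority and Fairness rejections. As a hedged sketch (not minikube's configuration), the limiter lives on the rest.Config used to build the clientset; the numbers below are illustrative:

    // throttling_sketch.go - illustrative only: raise client-go's local rate limit so
    // fewer requests sit in the client-side throttler seen in the log above.
    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig) // assumed kubeconfig path
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go's default is 5 requests/second
        cfg.Burst = 100 // client-go's default burst is 10
        return kubernetes.NewForConfig(cfg)
    }
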
	I0505 21:18:25.145251   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.340684   29367 request.go:629] Waited for 195.369973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:18:25.340755   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:18:25.340760   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.340767   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.340778   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.344364   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.540531   29367 request.go:629] Waited for 195.298535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:25.540580   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:25.540585   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.540594   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.540599   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.544432   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.545156   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:25.545174   29367 pod_ready.go:81] duration metric: took 399.915568ms for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.545190   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.740306   29367 request.go:629] Waited for 195.054768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980
	I0505 21:18:25.740357   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980
	I0505 21:18:25.740362   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.740368   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.740375   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.743743   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.940062   29367 request.go:629] Waited for 195.368531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.940115   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:25.940120   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:25.940128   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:25.940135   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:25.943974   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:25.944861   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:25.944882   29367 pod_ready.go:81] duration metric: took 399.684428ms for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:25.944894   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:26.139961   29367 request.go:629] Waited for 195.008004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.140022   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.140027   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.140034   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.140038   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.143851   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:26.340135   29367 request.go:629] Waited for 195.377838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.340201   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.340209   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.340220   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.340227   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.342958   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:26.540102   29367 request.go:629] Waited for 94.309581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.540178   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.540186   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.540203   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.540210   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.543990   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:26.740674   29367 request.go:629] Waited for 195.628445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.740729   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:26.740734   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.740741   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.740746   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.744188   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:26.946151   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:26.946177   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:26.946196   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:26.946204   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:26.950287   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:27.139810   29367 request.go:629] Waited for 188.283426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.139873   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.139879   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.139886   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.139889   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.143491   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:27.445558   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:27.445592   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.445600   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.445604   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.449930   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:27.539936   29367 request.go:629] Waited for 89.266345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.539990   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.539996   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.540011   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.540025   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.543502   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:27.945667   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:27.945694   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.945705   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.945710   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.949215   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:27.950124   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:27.950140   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:27.950147   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:27.950153   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:27.952800   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:27.953588   29367 pod_ready.go:102] pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 21:18:28.445701   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:28.445720   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.445727   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.445736   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.450099   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:28.451102   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:28.451119   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.451126   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.451131   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.454861   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:28.945870   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:18:28.945894   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.945904   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.945909   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.955098   29367 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0505 21:18:28.956275   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:28.956293   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.956303   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.956309   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.959133   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:18:28.959720   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:28.959743   29367 pod_ready.go:81] duration metric: took 3.014840076s for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:28.959760   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:28.959811   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd
	I0505 21:18:28.959818   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:28.959825   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:28.959833   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:28.963439   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.140603   29367 request.go:629] Waited for 176.36773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.140701   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.140710   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.140723   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.140734   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.144786   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:29.145620   29367 pod_ready.go:92] pod "kube-proxy-8xdzd" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:29.145645   29367 pod_ready.go:81] duration metric: took 185.874614ms for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.145659   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.340089   29367 request.go:629] Waited for 194.359804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:18:29.340174   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:18:29.340183   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.340215   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.340224   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.343873   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.540121   29367 request.go:629] Waited for 195.364212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:29.540169   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:29.540174   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.540181   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.540185   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.543776   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.544577   29367 pod_ready.go:92] pod "kube-proxy-wbf7q" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:29.544597   29367 pod_ready.go:81] duration metric: took 398.928355ms for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.544607   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.740355   29367 request.go:629] Waited for 195.68113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:18:29.740426   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:18:29.740436   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.740443   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.740447   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.744436   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.940665   29367 request.go:629] Waited for 195.379071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.940738   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:18:29.940746   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:29.940760   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:29.940765   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:29.944366   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:29.945150   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:29.945174   29367 pod_ready.go:81] duration metric: took 400.560267ms for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:29.945184   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:30.140364   29367 request.go:629] Waited for 195.10722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:18:30.140430   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:18:30.140439   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.140448   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.140455   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.143967   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:30.339885   29367 request.go:629] Waited for 195.326358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:30.339968   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:18:30.339977   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.339985   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.339995   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.344134   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:30.345057   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:18:30.345076   29367 pod_ready.go:81] duration metric: took 399.88044ms for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:18:30.345090   29367 pod_ready.go:38] duration metric: took 6.00082807s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
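
The pod_ready phase above repeats the same pattern per control-plane component: pods in kube-system are fetched and their PodReady condition is checked. A rough, hedged client-go equivalent is sketched below; the helper name and label selector argument are hypothetical, not taken from minikube.

    // podready_sketch.go - illustrative only: check whether every kube-system pod carrying
    // a given label reports the Ready condition, as the log above does component by component.
    package sketch

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podsReady(cs *kubernetes.Clientset, labelSelector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: labelSelector})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

For example, podsReady(cs, "component=etcd") would cover the etcd-ha-322980 and etcd-ha-322980-m02 pods checked above.
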
	I0505 21:18:30.345107   29367 api_server.go:52] waiting for apiserver process to appear ...
	I0505 21:18:30.345160   29367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:18:30.365232   29367 api_server.go:72] duration metric: took 14.310585824s to wait for apiserver process to appear ...
	I0505 21:18:30.365262   29367 api_server.go:88] waiting for apiserver healthz status ...
	I0505 21:18:30.365284   29367 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0505 21:18:30.372031   29367 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I0505 21:18:30.372097   29367 round_trippers.go:463] GET https://192.168.39.178:8443/version
	I0505 21:18:30.372102   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.372109   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.372114   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.373309   29367 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 21:18:30.373439   29367 api_server.go:141] control plane version: v1.30.0
	I0505 21:18:30.373465   29367 api_server.go:131] duration metric: took 8.19422ms to wait for apiserver health ...
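
Once the pods settle, the API server itself is probed through /healthz and /version, as the two requests above show. A minimal sketch of that probe using client-go's REST client (illustrative, not the api_server.go code path) looks like:

    // healthz_sketch.go - illustrative only: hit /healthz through the discovery REST client.
    package sketch

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func apiserverHealthy(cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            return err
        }
        if string(body) != "ok" { // the log above records exactly this "ok" body
            return fmt.Errorf("unexpected /healthz response: %q", body)
        }
        return nil
    }
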
	I0505 21:18:30.373475   29367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 21:18:30.539803   29367 request.go:629] Waited for 166.253744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.539871   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.539877   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.539898   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.539919   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.548300   29367 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 21:18:30.554208   29367 system_pods.go:59] 17 kube-system pods found
	I0505 21:18:30.554242   29367 system_pods.go:61] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:18:30.554249   29367 system_pods.go:61] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:18:30.554253   29367 system_pods.go:61] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:18:30.554256   29367 system_pods.go:61] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:18:30.554259   29367 system_pods.go:61] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:18:30.554261   29367 system_pods.go:61] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:18:30.554265   29367 system_pods.go:61] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:18:30.554268   29367 system_pods.go:61] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:18:30.554272   29367 system_pods.go:61] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:18:30.554276   29367 system_pods.go:61] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:18:30.554281   29367 system_pods.go:61] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:18:30.554284   29367 system_pods.go:61] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:18:30.554286   29367 system_pods.go:61] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:18:30.554289   29367 system_pods.go:61] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:18:30.554292   29367 system_pods.go:61] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:18:30.554295   29367 system_pods.go:61] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:18:30.554298   29367 system_pods.go:61] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:18:30.554304   29367 system_pods.go:74] duration metric: took 180.821839ms to wait for pod list to return data ...
	I0505 21:18:30.554314   29367 default_sa.go:34] waiting for default service account to be created ...
	I0505 21:18:30.739678   29367 request.go:629] Waited for 185.280789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:18:30.739727   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:18:30.739731   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.739738   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.739743   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.743560   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:18:30.743780   29367 default_sa.go:45] found service account: "default"
	I0505 21:18:30.743797   29367 default_sa.go:55] duration metric: took 189.476335ms for default service account to be created ...
	I0505 21:18:30.743804   29367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 21:18:30.940411   29367 request.go:629] Waited for 196.536289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.940478   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:18:30.940486   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:30.940494   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:30.940500   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:30.947561   29367 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 21:18:30.953662   29367 system_pods.go:86] 17 kube-system pods found
	I0505 21:18:30.953685   29367 system_pods.go:89] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:18:30.953691   29367 system_pods.go:89] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:18:30.953697   29367 system_pods.go:89] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:18:30.953703   29367 system_pods.go:89] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:18:30.953709   29367 system_pods.go:89] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:18:30.953715   29367 system_pods.go:89] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:18:30.953724   29367 system_pods.go:89] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:18:30.953731   29367 system_pods.go:89] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:18:30.953741   29367 system_pods.go:89] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:18:30.953750   29367 system_pods.go:89] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:18:30.953755   29367 system_pods.go:89] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:18:30.953761   29367 system_pods.go:89] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:18:30.953765   29367 system_pods.go:89] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:18:30.953771   29367 system_pods.go:89] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:18:30.953775   29367 system_pods.go:89] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:18:30.953781   29367 system_pods.go:89] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:18:30.953784   29367 system_pods.go:89] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:18:30.953792   29367 system_pods.go:126] duration metric: took 209.983933ms to wait for k8s-apps to be running ...
	I0505 21:18:30.953802   29367 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 21:18:30.953853   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:18:30.972504   29367 system_svc.go:56] duration metric: took 18.696692ms WaitForService to wait for kubelet
	I0505 21:18:30.972524   29367 kubeadm.go:576] duration metric: took 14.91788416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:18:30.972545   29367 node_conditions.go:102] verifying NodePressure condition ...
	I0505 21:18:31.140010   29367 request.go:629] Waited for 167.398505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes
	I0505 21:18:31.140102   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes
	I0505 21:18:31.140110   29367 round_trippers.go:469] Request Headers:
	I0505 21:18:31.140120   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:18:31.140127   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:18:31.144327   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:18:31.145063   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:18:31.145087   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:18:31.145099   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:18:31.145103   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:18:31.145114   29367 node_conditions.go:105] duration metric: took 172.561353ms to run NodePressure ...
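
The NodePressure verification reads each node's reported capacity (the ephemeral-storage and CPU figures above) straight from the node objects' Status.Capacity. A hedged client-go sketch of that read:

    // capacity_sketch.go - illustrative only: print each node's CPU and ephemeral-storage capacity.
    package sketch

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printCapacities(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[v1.ResourceCPU]
            storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }
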
	I0505 21:18:31.145132   29367 start.go:240] waiting for startup goroutines ...
	I0505 21:18:31.145159   29367 start.go:254] writing updated cluster config ...
	I0505 21:18:31.147465   29367 out.go:177] 
	I0505 21:18:31.149170   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:18:31.149261   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:18:31.151375   29367 out.go:177] * Starting "ha-322980-m03" control-plane node in "ha-322980" cluster
	I0505 21:18:31.152584   29367 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:18:31.152610   29367 cache.go:56] Caching tarball of preloaded images
	I0505 21:18:31.152705   29367 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:18:31.152717   29367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:18:31.152814   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:18:31.152975   29367 start.go:360] acquireMachinesLock for ha-322980-m03: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:18:31.153022   29367 start.go:364] duration metric: took 22.512µs to acquireMachinesLock for "ha-322980-m03"
	I0505 21:18:31.153039   29367 start.go:93] Provisioning new machine with config: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:18:31.153130   29367 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0505 21:18:31.154658   29367 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 21:18:31.154759   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:18:31.154799   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:18:31.170539   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45151
	I0505 21:18:31.170935   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:18:31.171430   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:18:31.171459   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:18:31.171810   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:18:31.172052   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:31.172220   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:31.172411   29367 start.go:159] libmachine.API.Create for "ha-322980" (driver="kvm2")
	I0505 21:18:31.172435   29367 client.go:168] LocalClient.Create starting
	I0505 21:18:31.172472   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 21:18:31.172512   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:18:31.172527   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:18:31.172596   29367 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 21:18:31.172625   29367 main.go:141] libmachine: Decoding PEM data...
	I0505 21:18:31.172643   29367 main.go:141] libmachine: Parsing certificate...
	I0505 21:18:31.172668   29367 main.go:141] libmachine: Running pre-create checks...
	I0505 21:18:31.172679   29367 main.go:141] libmachine: (ha-322980-m03) Calling .PreCreateCheck
	I0505 21:18:31.172846   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetConfigRaw
	I0505 21:18:31.173297   29367 main.go:141] libmachine: Creating machine...
	I0505 21:18:31.173311   29367 main.go:141] libmachine: (ha-322980-m03) Calling .Create
	I0505 21:18:31.173452   29367 main.go:141] libmachine: (ha-322980-m03) Creating KVM machine...
	I0505 21:18:31.174934   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found existing default KVM network
	I0505 21:18:31.175053   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found existing private KVM network mk-ha-322980
	I0505 21:18:31.175208   29367 main.go:141] libmachine: (ha-322980-m03) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03 ...
	I0505 21:18:31.175237   29367 main.go:141] libmachine: (ha-322980-m03) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 21:18:31.175319   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.175188   30843 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:18:31.175433   29367 main.go:141] libmachine: (ha-322980-m03) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 21:18:31.410349   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.410225   30843 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa...
	I0505 21:18:31.506568   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.506471   30843 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/ha-322980-m03.rawdisk...
	I0505 21:18:31.506601   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Writing magic tar header
	I0505 21:18:31.506617   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Writing SSH key tar header
	I0505 21:18:31.506634   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:31.506601   30843 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03 ...
	I0505 21:18:31.506776   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03
	I0505 21:18:31.506806   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 21:18:31.506821   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03 (perms=drwx------)
	I0505 21:18:31.506842   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 21:18:31.506856   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 21:18:31.506875   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 21:18:31.506889   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:18:31.506902   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 21:18:31.506918   29367 main.go:141] libmachine: (ha-322980-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 21:18:31.506928   29367 main.go:141] libmachine: (ha-322980-m03) Creating domain...
	I0505 21:18:31.506940   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 21:18:31.506951   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 21:18:31.506978   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home/jenkins
	I0505 21:18:31.507002   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Checking permissions on dir: /home
	I0505 21:18:31.507013   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Skipping /home - not owner
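
Before the domain is defined, the driver walks up the machine store path and makes sure each directory it owns is traversable (the "Setting executable bit" / "Checking permissions" lines above). The underlying operation is an ordinary stat-and-chmod; a hedged stdlib sketch, not the libmachine code itself:

    // perms_sketch.go - illustrative only: ensure a directory carries the owner-execute bit,
    // mirroring the permission fix-ups logged above.
    package sketch

    import "os"

    func ensureOwnerExec(dir string) error {
        info, err := os.Stat(dir)
        if err != nil {
            return err
        }
        mode := info.Mode()
        if mode&0o100 != 0 { // owner execute bit already set, nothing to do
            return nil
        }
        return os.Chmod(dir, mode.Perm()|0o100)
    }
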
	I0505 21:18:31.508005   29367 main.go:141] libmachine: (ha-322980-m03) define libvirt domain using xml: 
	I0505 21:18:31.508026   29367 main.go:141] libmachine: (ha-322980-m03) <domain type='kvm'>
	I0505 21:18:31.508037   29367 main.go:141] libmachine: (ha-322980-m03)   <name>ha-322980-m03</name>
	I0505 21:18:31.508049   29367 main.go:141] libmachine: (ha-322980-m03)   <memory unit='MiB'>2200</memory>
	I0505 21:18:31.508058   29367 main.go:141] libmachine: (ha-322980-m03)   <vcpu>2</vcpu>
	I0505 21:18:31.508065   29367 main.go:141] libmachine: (ha-322980-m03)   <features>
	I0505 21:18:31.508072   29367 main.go:141] libmachine: (ha-322980-m03)     <acpi/>
	I0505 21:18:31.508078   29367 main.go:141] libmachine: (ha-322980-m03)     <apic/>
	I0505 21:18:31.508085   29367 main.go:141] libmachine: (ha-322980-m03)     <pae/>
	I0505 21:18:31.508091   29367 main.go:141] libmachine: (ha-322980-m03)     
	I0505 21:18:31.508100   29367 main.go:141] libmachine: (ha-322980-m03)   </features>
	I0505 21:18:31.508109   29367 main.go:141] libmachine: (ha-322980-m03)   <cpu mode='host-passthrough'>
	I0505 21:18:31.508125   29367 main.go:141] libmachine: (ha-322980-m03)   
	I0505 21:18:31.508137   29367 main.go:141] libmachine: (ha-322980-m03)   </cpu>
	I0505 21:18:31.508146   29367 main.go:141] libmachine: (ha-322980-m03)   <os>
	I0505 21:18:31.508157   29367 main.go:141] libmachine: (ha-322980-m03)     <type>hvm</type>
	I0505 21:18:31.508167   29367 main.go:141] libmachine: (ha-322980-m03)     <boot dev='cdrom'/>
	I0505 21:18:31.508190   29367 main.go:141] libmachine: (ha-322980-m03)     <boot dev='hd'/>
	I0505 21:18:31.508203   29367 main.go:141] libmachine: (ha-322980-m03)     <bootmenu enable='no'/>
	I0505 21:18:31.508214   29367 main.go:141] libmachine: (ha-322980-m03)   </os>
	I0505 21:18:31.508226   29367 main.go:141] libmachine: (ha-322980-m03)   <devices>
	I0505 21:18:31.508236   29367 main.go:141] libmachine: (ha-322980-m03)     <disk type='file' device='cdrom'>
	I0505 21:18:31.508254   29367 main.go:141] libmachine: (ha-322980-m03)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/boot2docker.iso'/>
	I0505 21:18:31.508270   29367 main.go:141] libmachine: (ha-322980-m03)       <target dev='hdc' bus='scsi'/>
	I0505 21:18:31.508282   29367 main.go:141] libmachine: (ha-322980-m03)       <readonly/>
	I0505 21:18:31.508293   29367 main.go:141] libmachine: (ha-322980-m03)     </disk>
	I0505 21:18:31.508304   29367 main.go:141] libmachine: (ha-322980-m03)     <disk type='file' device='disk'>
	I0505 21:18:31.508319   29367 main.go:141] libmachine: (ha-322980-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 21:18:31.508336   29367 main.go:141] libmachine: (ha-322980-m03)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/ha-322980-m03.rawdisk'/>
	I0505 21:18:31.508352   29367 main.go:141] libmachine: (ha-322980-m03)       <target dev='hda' bus='virtio'/>
	I0505 21:18:31.508380   29367 main.go:141] libmachine: (ha-322980-m03)     </disk>
	I0505 21:18:31.508400   29367 main.go:141] libmachine: (ha-322980-m03)     <interface type='network'>
	I0505 21:18:31.508411   29367 main.go:141] libmachine: (ha-322980-m03)       <source network='mk-ha-322980'/>
	I0505 21:18:31.508423   29367 main.go:141] libmachine: (ha-322980-m03)       <model type='virtio'/>
	I0505 21:18:31.508431   29367 main.go:141] libmachine: (ha-322980-m03)     </interface>
	I0505 21:18:31.508442   29367 main.go:141] libmachine: (ha-322980-m03)     <interface type='network'>
	I0505 21:18:31.508453   29367 main.go:141] libmachine: (ha-322980-m03)       <source network='default'/>
	I0505 21:18:31.508464   29367 main.go:141] libmachine: (ha-322980-m03)       <model type='virtio'/>
	I0505 21:18:31.508483   29367 main.go:141] libmachine: (ha-322980-m03)     </interface>
	I0505 21:18:31.508499   29367 main.go:141] libmachine: (ha-322980-m03)     <serial type='pty'>
	I0505 21:18:31.508511   29367 main.go:141] libmachine: (ha-322980-m03)       <target port='0'/>
	I0505 21:18:31.508522   29367 main.go:141] libmachine: (ha-322980-m03)     </serial>
	I0505 21:18:31.508534   29367 main.go:141] libmachine: (ha-322980-m03)     <console type='pty'>
	I0505 21:18:31.508545   29367 main.go:141] libmachine: (ha-322980-m03)       <target type='serial' port='0'/>
	I0505 21:18:31.508557   29367 main.go:141] libmachine: (ha-322980-m03)     </console>
	I0505 21:18:31.508567   29367 main.go:141] libmachine: (ha-322980-m03)     <rng model='virtio'>
	I0505 21:18:31.508601   29367 main.go:141] libmachine: (ha-322980-m03)       <backend model='random'>/dev/random</backend>
	I0505 21:18:31.508626   29367 main.go:141] libmachine: (ha-322980-m03)     </rng>
	I0505 21:18:31.508636   29367 main.go:141] libmachine: (ha-322980-m03)     
	I0505 21:18:31.508657   29367 main.go:141] libmachine: (ha-322980-m03)     
	I0505 21:18:31.508674   29367 main.go:141] libmachine: (ha-322980-m03)   </devices>
	I0505 21:18:31.508689   29367 main.go:141] libmachine: (ha-322980-m03) </domain>
	I0505 21:18:31.508698   29367 main.go:141] libmachine: (ha-322980-m03) 
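Note on the step above: the kvm2 driver hands the XML it just printed to libvirt to define the node VM. A minimal sketch of doing the equivalent by hand with the virsh CLI via os/exec (illustrative only; "ha-322980-m03.xml" is a hypothetical file holding the XML above, and the real driver talks to libvirt through its API rather than the CLI):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Define the domain from the XML file, then boot it -- roughly the
        // "define libvirt domain using xml" / "Creating domain..." steps above.
        for _, args := range [][]string{
            {"virsh", "define", "ha-322980-m03.xml"}, // register the domain with libvirt
            {"virsh", "start", "ha-322980-m03"},      // power it on
        } {
            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
            if err != nil {
                log.Fatalf("%v failed: %v\n%s", args, err, out)
            }
            log.Printf("%v ok: %s", args, out)
        }
    }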
	I0505 21:18:31.515278   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:90:b2:60 in network default
	I0505 21:18:31.515919   29367 main.go:141] libmachine: (ha-322980-m03) Ensuring networks are active...
	I0505 21:18:31.515941   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:31.516616   29367 main.go:141] libmachine: (ha-322980-m03) Ensuring network default is active
	I0505 21:18:31.517069   29367 main.go:141] libmachine: (ha-322980-m03) Ensuring network mk-ha-322980 is active
	I0505 21:18:31.517420   29367 main.go:141] libmachine: (ha-322980-m03) Getting domain xml...
	I0505 21:18:31.518170   29367 main.go:141] libmachine: (ha-322980-m03) Creating domain...
	I0505 21:18:32.728189   29367 main.go:141] libmachine: (ha-322980-m03) Waiting to get IP...
	I0505 21:18:32.729118   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:32.729602   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:32.729631   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:32.729550   30843 retry.go:31] will retry after 199.252104ms: waiting for machine to come up
	I0505 21:18:32.930028   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:32.930485   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:32.930513   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:32.930436   30843 retry.go:31] will retry after 253.528343ms: waiting for machine to come up
	I0505 21:18:33.185827   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:33.186234   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:33.186256   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:33.186211   30843 retry.go:31] will retry after 453.653869ms: waiting for machine to come up
	I0505 21:18:33.641714   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:33.642075   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:33.642101   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:33.642031   30843 retry.go:31] will retry after 423.63847ms: waiting for machine to come up
	I0505 21:18:34.067574   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:34.068005   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:34.068030   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:34.067963   30843 retry.go:31] will retry after 707.190206ms: waiting for machine to come up
	I0505 21:18:34.776598   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:34.777113   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:34.777137   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:34.777051   30843 retry.go:31] will retry after 823.896849ms: waiting for machine to come up
	I0505 21:18:35.603014   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:35.603418   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:35.603443   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:35.603372   30843 retry.go:31] will retry after 1.150013486s: waiting for machine to come up
	I0505 21:18:36.755487   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:36.755968   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:36.756006   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:36.755960   30843 retry.go:31] will retry after 1.125565148s: waiting for machine to come up
	I0505 21:18:37.882632   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:37.882961   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:37.882990   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:37.882924   30843 retry.go:31] will retry after 1.186554631s: waiting for machine to come up
	I0505 21:18:39.070675   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:39.071010   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:39.071034   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:39.070949   30843 retry.go:31] will retry after 2.150680496s: waiting for machine to come up
	I0505 21:18:41.223031   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:41.223557   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:41.223592   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:41.223476   30843 retry.go:31] will retry after 2.688830385s: waiting for machine to come up
	I0505 21:18:43.913880   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:43.914296   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:43.914317   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:43.914267   30843 retry.go:31] will retry after 2.277627535s: waiting for machine to come up
	I0505 21:18:46.193457   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:46.193888   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:46.193919   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:46.193839   30843 retry.go:31] will retry after 3.873768109s: waiting for machine to come up
	I0505 21:18:50.068786   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:50.069219   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find current IP address of domain ha-322980-m03 in network mk-ha-322980
	I0505 21:18:50.069249   29367 main.go:141] libmachine: (ha-322980-m03) DBG | I0505 21:18:50.069169   30843 retry.go:31] will retry after 4.135874367s: waiting for machine to come up
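The retry.go lines above poll the DHCP leases of network mk-ha-322980 for the VM's MAC address with a growing delay until an IP appears. A minimal sketch of that retry shape, assuming a stand-in lookup function (not the driver's real lease query):

    package main

    import (
        "errors"
        "log"
        "time"
    )

    // waitForIP polls until lookupIP reports an address, growing the delay
    // between attempts the way the "will retry after ..." lines above do.
    // lookupIP is a hypothetical stand-in for the driver's DHCP-lease query.
    func waitForIP(lookupIP func() (string, bool)) (string, error) {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            log.Printf("will retry after %v: waiting for machine to come up", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // roughly the growth pattern seen above
            }
        }
        return "", errors.New("timed out waiting for an IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, bool) {
            attempts++
            return "192.168.39.29", attempts > 3 // pretend the lease appears on the 4th try
        })
        log.Println(ip, err)
    }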
	I0505 21:18:54.208167   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:54.208555   29367 main.go:141] libmachine: (ha-322980-m03) Found IP for machine: 192.168.39.29
	I0505 21:18:54.208571   29367 main.go:141] libmachine: (ha-322980-m03) Reserving static IP address...
	I0505 21:18:54.208584   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has current primary IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:54.208947   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find host DHCP lease matching {name: "ha-322980-m03", mac: "52:54:00:c6:64:b7", ip: "192.168.39.29"} in network mk-ha-322980
	I0505 21:18:54.279929   29367 main.go:141] libmachine: (ha-322980-m03) Reserved static IP address: 192.168.39.29
	I0505 21:18:54.279960   29367 main.go:141] libmachine: (ha-322980-m03) Waiting for SSH to be available...
	I0505 21:18:54.279971   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Getting to WaitForSSH function...
	I0505 21:18:54.282838   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:54.283259   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980
	I0505 21:18:54.283287   29367 main.go:141] libmachine: (ha-322980-m03) DBG | unable to find defined IP address of network mk-ha-322980 interface with MAC address 52:54:00:c6:64:b7
	I0505 21:18:54.283437   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH client type: external
	I0505 21:18:54.283466   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa (-rw-------)
	I0505 21:18:54.283507   29367 main.go:141] libmachine: (ha-322980-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:18:54.283527   29367 main.go:141] libmachine: (ha-322980-m03) DBG | About to run SSH command:
	I0505 21:18:54.283545   29367 main.go:141] libmachine: (ha-322980-m03) DBG | exit 0
	I0505 21:18:54.287074   29367 main.go:141] libmachine: (ha-322980-m03) DBG | SSH cmd err, output: exit status 255: 
	I0505 21:18:54.287098   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0505 21:18:54.287108   29367 main.go:141] libmachine: (ha-322980-m03) DBG | command : exit 0
	I0505 21:18:54.287113   29367 main.go:141] libmachine: (ha-322980-m03) DBG | err     : exit status 255
	I0505 21:18:54.287121   29367 main.go:141] libmachine: (ha-322980-m03) DBG | output  : 
	I0505 21:18:57.287660   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Getting to WaitForSSH function...
	I0505 21:18:57.290086   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.290564   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.290589   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.290738   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH client type: external
	I0505 21:18:57.290759   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa (-rw-------)
	I0505 21:18:57.290813   29367 main.go:141] libmachine: (ha-322980-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:18:57.290839   29367 main.go:141] libmachine: (ha-322980-m03) DBG | About to run SSH command:
	I0505 21:18:57.290853   29367 main.go:141] libmachine: (ha-322980-m03) DBG | exit 0
	I0505 21:18:57.419820   29367 main.go:141] libmachine: (ha-322980-m03) DBG | SSH cmd err, output: <nil>: 
	I0505 21:18:57.420178   29367 main.go:141] libmachine: (ha-322980-m03) KVM machine creation complete!
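The WaitForSSH lines above (exit status 255 first, then success once the guest is up) probe the machine by running "exit 0" over the external ssh client. A sketch of the same probe with os/exec, using a subset of the standard OpenSSH -o options shown in the logged command (the key path is a placeholder, not this run's real path):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        // Run "exit 0" over ssh until the guest accepts the connection.
        args := []string{
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-i", "/path/to/machines/ha-322980-m03/id_rsa", // placeholder key path
            "docker@192.168.39.29",
            "exit 0",
        }
        for attempt := 1; attempt <= 20; attempt++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                log.Println("SSH is available")
                return
            }
            time.Sleep(3 * time.Second) // the driver above retries roughly every 3s
        }
        log.Fatal("gave up waiting for SSH")
    }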
	I0505 21:18:57.420458   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetConfigRaw
	I0505 21:18:57.420935   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:57.421107   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:57.421278   29367 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 21:18:57.421296   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:18:57.422618   29367 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 21:18:57.422637   29367 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 21:18:57.422645   29367 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 21:18:57.422654   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.424963   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.425355   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.425382   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.425504   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.425653   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.425798   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.425929   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.426085   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.426328   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.426340   29367 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 21:18:57.535116   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:18:57.535143   29367 main.go:141] libmachine: Detecting the provisioner...
	I0505 21:18:57.535155   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.538912   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.539571   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.539600   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.539793   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.540003   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.540177   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.540355   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.540524   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.540674   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.540684   29367 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 21:18:57.648740   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 21:18:57.648802   29367 main.go:141] libmachine: found compatible host: buildroot
	I0505 21:18:57.648809   29367 main.go:141] libmachine: Provisioning with buildroot...
	I0505 21:18:57.648816   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:57.649084   29367 buildroot.go:166] provisioning hostname "ha-322980-m03"
	I0505 21:18:57.649112   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:57.649306   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.652050   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.652395   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.652423   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.652551   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.652717   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.652856   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.653045   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.653216   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.653393   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.653409   29367 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980-m03 && echo "ha-322980-m03" | sudo tee /etc/hostname
	I0505 21:18:57.780562   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980-m03
	
	I0505 21:18:57.780594   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.783541   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.783958   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.783991   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.784191   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:57.784384   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.784613   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:57.784801   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:57.784986   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:57.785186   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:57.785218   29367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:18:57.906398   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:18:57.906433   29367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:18:57.906455   29367 buildroot.go:174] setting up certificates
	I0505 21:18:57.906469   29367 provision.go:84] configureAuth start
	I0505 21:18:57.906485   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetMachineName
	I0505 21:18:57.906749   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:18:57.909266   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.909659   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.909690   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.909837   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:57.911619   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.911964   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:57.911990   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:57.912105   29367 provision.go:143] copyHostCerts
	I0505 21:18:57.912136   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:18:57.912173   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:18:57.912186   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:18:57.912292   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:18:57.912394   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:18:57.912420   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:18:57.912425   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:18:57.912463   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:18:57.912525   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:18:57.912548   29367 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:18:57.912557   29367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:18:57.912592   29367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:18:57.912655   29367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980-m03 san=[127.0.0.1 192.168.39.29 ha-322980-m03 localhost minikube]
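The "generating server cert" line above issues a node certificate signed by the local minikube CA, carrying the listed IP and DNS SANs. A self-contained crypto/x509 sketch of that idea (not minikube's actual code; key sizes, validity periods, and subject fields are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Make a throwaway CA, then sign a server cert with IP and DNS SANs
        // like those in the log line above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-322980-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.29")},
            DNSNames:     []string{"ha-322980-m03", "localhost", "minikube"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("issued server cert, %d bytes of DER", len(srvDER))
    }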
	I0505 21:18:58.060988   29367 provision.go:177] copyRemoteCerts
	I0505 21:18:58.061038   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:18:58.061059   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.063811   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.064265   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.064295   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.064465   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.064638   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.064770   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.064871   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:58.150293   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:18:58.150356   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:18:58.179798   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:18:58.179861   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:18:58.207727   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:18:58.207795   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 21:18:58.237652   29367 provision.go:87] duration metric: took 331.170378ms to configureAuth
	I0505 21:18:58.237680   29367 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:18:58.237923   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:18:58.238003   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.240687   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.241062   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.241103   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.241279   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.241439   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.241595   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.241715   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.241856   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:58.242007   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:58.242022   29367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:18:58.541225   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:18:58.541253   29367 main.go:141] libmachine: Checking connection to Docker...
	I0505 21:18:58.541263   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetURL
	I0505 21:18:58.542725   29367 main.go:141] libmachine: (ha-322980-m03) DBG | Using libvirt version 6000000
	I0505 21:18:58.545160   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.545564   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.545597   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.545773   29367 main.go:141] libmachine: Docker is up and running!
	I0505 21:18:58.545789   29367 main.go:141] libmachine: Reticulating splines...
	I0505 21:18:58.545797   29367 client.go:171] duration metric: took 27.373355272s to LocalClient.Create
	I0505 21:18:58.545824   29367 start.go:167] duration metric: took 27.373413959s to libmachine.API.Create "ha-322980"
	I0505 21:18:58.545836   29367 start.go:293] postStartSetup for "ha-322980-m03" (driver="kvm2")
	I0505 21:18:58.545851   29367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:18:58.545874   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.546118   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:18:58.546146   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.548424   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.548850   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.548880   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.548996   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.549168   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.549342   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.549511   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:58.635360   29367 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:18:58.640495   29367 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:18:58.640520   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:18:58.640586   29367 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:18:58.640675   29367 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:18:58.640686   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:18:58.640790   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:18:58.650860   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:18:58.678725   29367 start.go:296] duration metric: took 132.877481ms for postStartSetup
	I0505 21:18:58.678770   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetConfigRaw
	I0505 21:18:58.679495   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:18:58.682278   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.682582   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.682607   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.682828   29367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:18:58.682993   29367 start.go:128] duration metric: took 27.529851966s to createHost
	I0505 21:18:58.683015   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.685049   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.685436   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.685465   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.685590   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.685769   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.685932   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.686098   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.686238   29367 main.go:141] libmachine: Using SSH client type: native
	I0505 21:18:58.686386   29367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0505 21:18:58.686397   29367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:18:58.796631   29367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714943938.783062826
	
	I0505 21:18:58.796654   29367 fix.go:216] guest clock: 1714943938.783062826
	I0505 21:18:58.796663   29367 fix.go:229] Guest: 2024-05-05 21:18:58.783062826 +0000 UTC Remote: 2024-05-05 21:18:58.683005861 +0000 UTC m=+210.545765441 (delta=100.056965ms)
	I0505 21:18:58.796683   29367 fix.go:200] guest clock delta is within tolerance: 100.056965ms
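The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only intervene if the delta exceeds a tolerance; here it is ~100ms, which passes. A small sketch of that comparison (the tolerance value below is an assumption, not minikube's exact setting):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Parse the guest's "date +%s.%N" output and compare with the host clock.
        guestOut := "1714943938.783062826" // value echoed by the guest above
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        tolerance := 2 * time.Second // assumed threshold
        fmt.Printf("guest clock delta: %v (adjust clock: %v)\n",
            delta, delta > tolerance || delta < -tolerance)
    }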
	I0505 21:18:58.796693   29367 start.go:83] releasing machines lock for "ha-322980-m03", held for 27.643657327s
	I0505 21:18:58.796716   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.796972   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:18:58.799515   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.799874   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.799900   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.802093   29367 out.go:177] * Found network options:
	I0505 21:18:58.803610   29367 out.go:177]   - NO_PROXY=192.168.39.178,192.168.39.228
	W0505 21:18:58.804940   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 21:18:58.804962   29367 proxy.go:119] fail to check proxy env: Error ip not in block
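The two proxy warnings above come from checking whether each NO_PROXY entry already covers the node; "ip not in block" means the entry was a plain IP rather than a CIDR block. A minimal sketch of that kind of check (illustrative, not minikube's proxy.go):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // coveredByNoProxy reports whether ip matches a NO_PROXY entry, either
    // exactly or by falling inside a CIDR block entry.
    func coveredByNoProxy(ip string, noProxy string) bool {
        addr := net.ParseIP(ip)
        for _, entry := range strings.Split(noProxy, ",") {
            entry = strings.TrimSpace(entry)
            if entry == ip {
                return true // exact IP match
            }
            if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
                return true // inside a CIDR block
            }
        }
        return false
    }

    func main() {
        // The new node's IP against the NO_PROXY list shown above.
        fmt.Println(coveredByNoProxy("192.168.39.29", "192.168.39.178,192.168.39.228"))
    }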
	I0505 21:18:58.804977   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.805551   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.805782   29367 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:18:58.805876   29367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:18:58.805915   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	W0505 21:18:58.805979   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 21:18:58.806003   29367 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 21:18:58.806068   29367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:18:58.806089   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:18:58.808854   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809186   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809452   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.809483   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809630   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.809757   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:18:58.809786   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:18:58.809791   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.809969   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.810009   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:18:58.810174   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:58.810227   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:18:58.810370   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:18:58.810498   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:18:59.055917   29367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:18:59.063181   29367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:18:59.063258   29367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:18:59.082060   29367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 21:18:59.082081   29367 start.go:494] detecting cgroup driver to use...
	I0505 21:18:59.082143   29367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:18:59.102490   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:18:59.118744   29367 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:18:59.118798   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:18:59.135687   29367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:18:59.161082   29367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:18:59.284170   29367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:18:59.430037   29367 docker.go:233] disabling docker service ...
	I0505 21:18:59.430096   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:18:59.445892   29367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:18:59.459691   29367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:18:59.612769   29367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:18:59.773670   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:18:59.789087   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:18:59.809428   29367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:18:59.809496   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.821422   29367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:18:59.821488   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.833237   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.845606   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.857286   29367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:18:59.870600   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.883365   29367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:18:59.902940   29367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
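After the sed edits above, the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf would contain roughly the following settings (illustrative reconstruction; the section headers are the standard CRI-O TOML ones and do not appear in the logged commands):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"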
	I0505 21:18:59.915118   29367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:18:59.925710   29367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 21:18:59.925762   29367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 21:18:59.940381   29367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:18:59.950882   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:19:00.096868   29367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:19:00.252619   29367 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:19:00.252698   29367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:19:00.258481   29367 start.go:562] Will wait 60s for crictl version
	I0505 21:19:00.258543   29367 ssh_runner.go:195] Run: which crictl
	I0505 21:19:00.263197   29367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:19:00.311270   29367 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:19:00.311361   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:19:00.344287   29367 ssh_runner.go:195] Run: crio --version
	I0505 21:19:00.379161   29367 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:19:00.380590   29367 out.go:177]   - env NO_PROXY=192.168.39.178
	I0505 21:19:00.382104   29367 out.go:177]   - env NO_PROXY=192.168.39.178,192.168.39.228
	I0505 21:19:00.383357   29367 main.go:141] libmachine: (ha-322980-m03) Calling .GetIP
	I0505 21:19:00.386321   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:19:00.386717   29367 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:19:00.386750   29367 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:19:00.386980   29367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:19:00.392694   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:19:00.408501   29367 mustload.go:65] Loading cluster: ha-322980
	I0505 21:19:00.408768   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:19:00.409091   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:19:00.409140   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:19:00.425690   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0505 21:19:00.426132   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:19:00.426599   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:19:00.426624   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:19:00.426931   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:19:00.427126   29367 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:19:00.428655   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:19:00.429056   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:19:00.429099   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:19:00.444224   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0505 21:19:00.444622   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:19:00.445055   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:19:00.445077   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:19:00.445418   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:19:00.445650   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
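The "Launching plugin server ... Plugin server listening at address 127.0.0.1:NNNNN ... Calling .GetVersion/.GetState" lines above reflect libmachine's plugin model: the kvm2 driver runs as a separate process and the main binary calls it over a local RPC port. A minimal net/rpc sketch of that client/server shape (only the shape; not libmachine's actual wire protocol or method set):

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Driver stands in for the kvm2 driver plugin; the real plugin exposes many
    // more methods (GetVersion, GetMachineName, GetState, ...).
    type Driver struct{}

    func (d *Driver) GetState(_ int, reply *string) error {
        *reply = "Running"
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // cf. "Plugin server listening at address 127.0.0.1:34965"
        if err != nil {
            log.Fatal(err)
        }
        go srv.Accept(ln) // serve plugin calls in the background

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        var state string
        if err := client.Call("Driver.GetState", 0, &state); err != nil {
            log.Fatal(err)
        }
        fmt.Println("driver state:", state)
    }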
	I0505 21:19:00.445811   29367 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.29
	I0505 21:19:00.445824   29367 certs.go:194] generating shared ca certs ...
	I0505 21:19:00.445840   29367 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:19:00.445966   29367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:19:00.446007   29367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:19:00.446016   29367 certs.go:256] generating profile certs ...
	I0505 21:19:00.446078   29367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:19:00.446115   29367 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3
	I0505 21:19:00.446128   29367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.29 192.168.39.254]
	I0505 21:19:00.557007   29367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3 ...
	I0505 21:19:00.557038   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3: {Name:mkeabfd63b086fbe6c5a694b37c05a9029ccc5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:19:00.557219   29367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3 ...
	I0505 21:19:00.557237   29367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3: {Name:mkcf261d94995a12f366032c627df88044d19e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:19:00.557308   29367 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.851dd8e3 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:19:00.557425   29367 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.851dd8e3 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:19:00.557541   29367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
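The apiserver certificate generated above is issued for the in-cluster service IP, loopback, all three control-plane node IPs and the kube-vip VIP, so any of those endpoints will pass TLS verification. A hedged way to confirm the SAN list on the node once the cert has been copied over (path matches the scp destination below):

    # list the Subject Alternative Names baked into the apiserver cert
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
    # expected: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.178, 192.168.39.228, 192.168.39.29, 192.168.39.254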
	I0505 21:19:00.557556   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:19:00.557570   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:19:00.557583   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:19:00.557595   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:19:00.557607   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:19:00.557618   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:19:00.557631   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:19:00.557642   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:19:00.557689   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:19:00.557732   29367 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:19:00.557745   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:19:00.557778   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:19:00.557806   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:19:00.557834   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:19:00.557883   29367 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:19:00.557918   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:19:00.557937   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:00.557953   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:19:00.557989   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:19:00.561068   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:00.561734   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:19:00.561760   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:00.561951   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:19:00.562136   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:19:00.562313   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:19:00.562444   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:19:00.639783   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0505 21:19:00.646267   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 21:19:00.659763   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0505 21:19:00.665438   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0505 21:19:00.677776   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 21:19:00.682618   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 21:19:00.694377   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0505 21:19:00.699270   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 21:19:00.710440   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0505 21:19:00.715212   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 21:19:00.726959   29367 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0505 21:19:00.733524   29367 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 21:19:00.745987   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:19:00.776306   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:19:00.804124   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:19:00.833539   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:19:00.860099   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0505 21:19:00.887074   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 21:19:00.912781   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:19:00.939713   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:19:00.966650   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:19:00.991875   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:19:01.019615   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:19:01.044884   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 21:19:01.064393   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0505 21:19:01.083899   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 21:19:01.102815   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 21:19:01.123852   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 21:19:01.143578   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 21:19:01.162569   29367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 21:19:01.181825   29367 ssh_runner.go:195] Run: openssl version
	I0505 21:19:01.187965   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:19:01.200357   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:19:01.205126   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:19:01.205173   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:19:01.211088   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:19:01.223287   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:19:01.235075   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:01.239792   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:01.239850   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:19:01.247145   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:19:01.262467   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:19:01.275296   29367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:19:01.280073   29367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:19:01.280134   29367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:19:01.286359   29367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
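The commands above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0 and 51391683.0 in this run), which is how OpenSSL-based clients locate trusted roots. The same pattern for a single file, as a sketch:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    # compute the subject hash OpenSSL uses for lookups (b5213941 here)
    hash=$(openssl x509 -hash -noout -in "$cert")
    # create or refresh the hash symlink
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"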
	I0505 21:19:01.298575   29367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:19:01.303164   29367 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 21:19:01.303230   29367 kubeadm.go:928] updating node {m03 192.168.39.29 8443 v1.30.0 crio true true} ...
	I0505 21:19:01.303328   29367 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:19:01.303359   29367 kube-vip.go:111] generating kube-vip config ...
	I0505 21:19:01.303401   29367 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:19:01.321789   29367 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:19:01.321858   29367 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
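The manifest above runs kube-vip as a static pod on each control-plane node: it claims the virtual IP 192.168.39.254 on eth0 via ARP, elects a leader through the plndr-cp-lock lease, and with lb_enable also load-balances API traffic on port 8443. It is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so the kubelet starts it automatically. A rough check from whichever node currently holds the lease (a sketch; output varies):

    # the elected leader should carry the VIP on eth0
    ip addr show eth0 | grep 192.168.39.254
    # the API server should answer on the VIP:port pair configured above
    curl -k https://192.168.39.254:8443/version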
	I0505 21:19:01.321920   29367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:19:01.334314   29367 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0505 21:19:01.334375   29367 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0505 21:19:01.345667   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0505 21:19:01.345679   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0505 21:19:01.345697   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:19:01.345712   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:19:01.345667   29367 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0505 21:19:01.345780   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:19:01.345809   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:19:01.345875   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:19:01.361980   29367 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:19:01.361996   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0505 21:19:01.362026   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0505 21:19:01.362062   29367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:19:01.362067   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0505 21:19:01.362090   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0505 21:19:01.387238   29367 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0505 21:19:01.387273   29367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
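Since /var/lib/minikube/binaries/v1.30.0 did not exist on the node, kubeadm, kubectl and kubelet are copied over from the local cache. The download URLs logged above pair each binary with its published .sha256 file; verifying such a download by hand follows the same pattern (a sketch, assuming curl and sha256sum are available):

    v=v1.30.0
    for bin in kubeadm kubectl kubelet; do
      curl -fsSLO "https://dl.k8s.io/release/${v}/bin/linux/amd64/${bin}"
      curl -fsSL  "https://dl.k8s.io/release/${v}/bin/linux/amd64/${bin}.sha256" -o "${bin}.sha256"
      # the .sha256 file holds only the digest, so build the line sha256sum expects
      echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check -
    done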
	I0505 21:19:02.379027   29367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 21:19:02.390731   29367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 21:19:02.409464   29367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:19:02.428169   29367 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:19:02.447238   29367 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:19:02.451984   29367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:19:02.466221   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:19:02.602089   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:19:02.622092   29367 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:19:02.622538   29367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:19:02.622588   29367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:19:02.639531   29367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34713
	I0505 21:19:02.639945   29367 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:19:02.640442   29367 main.go:141] libmachine: Using API Version  1
	I0505 21:19:02.640469   29367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:19:02.640781   29367 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:19:02.640976   29367 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:19:02.641134   29367 start.go:316] joinCluster: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:19:02.641244   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0505 21:19:02.641265   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:19:02.644568   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:02.644993   29367 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:19:02.645018   29367 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:19:02.645202   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:19:02.645369   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:19:02.645487   29367 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:19:02.645593   29367 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:19:02.821421   29367 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:19:02.821470   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ere86y.rsom8095c8gt6u0e --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m03 --control-plane --apiserver-advertise-address=192.168.39.29 --apiserver-bind-port=8443"
	I0505 21:19:27.235707   29367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ere86y.rsom8095c8gt6u0e --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m03 --control-plane --apiserver-advertise-address=192.168.39.29 --apiserver-bind-port=8443": (24.41421058s)
	I0505 21:19:27.235750   29367 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0505 21:19:27.795445   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-322980-m03 minikube.k8s.io/updated_at=2024_05_05T21_19_27_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=ha-322980 minikube.k8s.io/primary=false
	I0505 21:19:27.943880   29367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-322980-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0505 21:19:28.087974   29367 start.go:318] duration metric: took 25.446835494s to joinCluster
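With kubeadm join complete, the new control-plane node is labelled with the minikube metadata and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed, since minikube HA nodes double as workers. A quick post-join check from any kubeconfig that reaches the cluster (node name from the log):

    kubectl get node ha-322980-m03 -o wide
    # the taint removed above should no longer be listed
    kubectl describe node ha-322980-m03 | grep -i taints
    kubectl get node ha-322980-m03 --show-labels | grep minikube.k8s.io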
	I0505 21:19:28.088051   29367 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 21:19:28.089332   29367 out.go:177] * Verifying Kubernetes components...
	I0505 21:19:28.090663   29367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:19:28.088443   29367 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:19:28.402321   29367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:19:28.441042   29367 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:19:28.441463   29367 kapi.go:59] client config for ha-322980: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 21:19:28.441552   29367 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.178:8443
	I0505 21:19:28.441814   29367 node_ready.go:35] waiting up to 6m0s for node "ha-322980-m03" to be "Ready" ...
	I0505 21:19:28.441906   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:28.441918   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:28.441929   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:28.441938   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:28.445385   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:28.942629   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:28.942657   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:28.942668   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:28.942673   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:28.946547   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:29.442717   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:29.442747   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:29.442758   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:29.442764   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:29.447216   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:29.942477   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:29.942497   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:29.942504   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:29.942508   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:29.946281   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:30.442120   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:30.442141   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:30.442148   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:30.442152   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:30.446124   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:30.446790   29367 node_ready.go:53] node "ha-322980-m03" has status "Ready":"False"
	I0505 21:19:30.942072   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:30.942097   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:30.942109   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:30.942115   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:30.963626   29367 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0505 21:19:31.442431   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:31.442457   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:31.442467   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:31.442475   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:31.446018   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:31.942496   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:31.942516   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:31.942528   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:31.942536   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:31.946384   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:32.442609   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:32.442630   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:32.442638   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:32.442643   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:32.446771   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:32.448345   29367 node_ready.go:53] node "ha-322980-m03" has status "Ready":"False"
	I0505 21:19:32.942947   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:32.942969   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:32.942977   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:32.942981   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:32.946462   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:33.442291   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:33.442320   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:33.442332   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:33.442339   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:33.447124   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:33.942912   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:33.942933   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:33.942941   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:33.942947   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:33.947532   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:34.442132   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:34.442163   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:34.442169   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:34.442173   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:34.447683   29367 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 21:19:34.448388   29367 node_ready.go:53] node "ha-322980-m03" has status "Ready":"False"
	I0505 21:19:34.942774   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:34.942797   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:34.942805   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:34.942811   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:34.946789   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.442514   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:35.442533   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.442539   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.442544   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.446342   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.447212   29367 node_ready.go:49] node "ha-322980-m03" has status "Ready":"True"
	I0505 21:19:35.447239   29367 node_ready.go:38] duration metric: took 7.005404581s for node "ha-322980-m03" to be "Ready" ...
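The readiness loop above polls GET /api/v1/nodes/ha-322980-m03 roughly twice a second until the Ready condition turns True, which took about 7 seconds here. The same wait can be expressed with kubectl, using the 6-minute budget from the log:

    kubectl wait --for=condition=Ready node/ha-322980-m03 --timeout=6m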
	I0505 21:19:35.447252   29367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 21:19:35.447326   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:35.447342   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.447352   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.447359   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.454461   29367 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 21:19:35.462354   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.462426   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-78zmw
	I0505 21:19:35.462435   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.462443   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.462447   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.466152   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.467069   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:35.467088   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.467097   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.467103   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.470307   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.470845   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.470864   29367 pod_ready.go:81] duration metric: took 8.486217ms for pod "coredns-7db6d8ff4d-78zmw" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.470873   29367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.470927   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqt45
	I0505 21:19:35.470936   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.470943   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.470947   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.474030   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.474923   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:35.474946   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.474957   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.474962   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.478560   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.479299   29367 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.479323   29367 pod_ready.go:81] duration metric: took 8.442107ms for pod "coredns-7db6d8ff4d-fqt45" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.479335   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.479404   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980
	I0505 21:19:35.479415   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.479425   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.479431   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.482559   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.483116   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:35.483129   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.483136   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.483139   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.486243   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.486789   29367 pod_ready.go:92] pod "etcd-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.486808   29367 pod_ready.go:81] duration metric: took 7.466072ms for pod "etcd-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.486818   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.486861   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m02
	I0505 21:19:35.486871   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.486878   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.486882   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.490279   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.490751   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:35.490768   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.490778   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.490786   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.494034   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.494650   29367 pod_ready.go:92] pod "etcd-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.494665   29367 pod_ready.go:81] duration metric: took 7.842312ms for pod "etcd-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.494673   29367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:35.643103   29367 request.go:629] Waited for 148.371982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m03
	I0505 21:19:35.643189   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/etcd-ha-322980-m03
	I0505 21:19:35.643198   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.643206   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.643212   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.647580   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:35.843087   29367 request.go:629] Waited for 194.428682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:35.843166   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:35.843174   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:35.843189   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:35.843203   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:35.846828   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:35.847895   29367 pod_ready.go:92] pod "etcd-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:35.847918   29367 pod_ready.go:81] duration metric: took 353.238939ms for pod "etcd-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
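After the node is Ready, the same poll-until-Ready pattern is applied to each system-critical pod (kube-dns/coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler). Equivalent one-liners for the pods checked so far (names and namespace from the log):

    kubectl -n kube-system wait --for=condition=Ready pod/etcd-ha-322980-m03 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m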
	I0505 21:19:35.847943   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.043049   29367 request.go:629] Waited for 195.034663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:19:36.043136   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980
	I0505 21:19:36.043146   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.043162   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.043175   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.050109   29367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 21:19:36.243498   29367 request.go:629] Waited for 192.350383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:36.243561   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:36.243572   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.243582   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.243591   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.247268   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:36.248083   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:36.248102   29367 pod_ready.go:81] duration metric: took 400.150655ms for pod "kube-apiserver-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.248112   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.443249   29367 request.go:629] Waited for 195.071058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:19:36.443320   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m02
	I0505 21:19:36.443325   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.443334   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.443341   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.447570   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:36.643595   29367 request.go:629] Waited for 195.374318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:36.643682   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:36.643697   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.643712   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.643719   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.648195   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:36.649103   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:36.649128   29367 pod_ready.go:81] duration metric: took 401.00883ms for pod "kube-apiserver-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.649143   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:36.843546   29367 request.go:629] Waited for 194.319072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:36.843609   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:36.843614   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:36.843631   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:36.843637   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:36.847887   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:37.042736   29367 request.go:629] Waited for 194.236068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.042806   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.042812   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.042819   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.042826   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.046788   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:37.242545   29367 request.go:629] Waited for 93.237949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:37.242627   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:37.242634   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.242648   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.242657   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.246071   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:37.443574   29367 request.go:629] Waited for 196.323769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.443666   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.443680   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.443692   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.443700   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.448543   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:37.649890   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:37.649914   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.649925   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.649935   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.653774   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:37.843060   29367 request.go:629] Waited for 188.386721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.843122   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:37.843137   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:37.843143   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:37.843147   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:37.847679   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:38.149417   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:38.149442   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.149451   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.149456   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.152908   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:38.242684   29367 request.go:629] Waited for 88.923377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:38.242731   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:38.242736   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.242744   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.242747   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.246536   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:38.650252   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:38.650278   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.650289   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.650296   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.653966   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:38.655004   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:38.655022   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:38.655032   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:38.655038   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:38.659818   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:38.660414   29367 pod_ready.go:102] pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace has status "Ready":"False"
	I0505 21:19:39.149705   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:39.149730   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.149741   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.149747   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.153246   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:39.154213   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:39.154232   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.154243   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.154248   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.157387   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:39.650228   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:39.650249   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.650257   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.650261   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.654174   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:39.655176   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:39.655199   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:39.655206   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:39.655213   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:39.658325   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:40.149455   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:40.149478   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.149486   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.149492   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.153620   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:40.154589   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:40.154605   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.154612   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.154617   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.157755   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:40.649467   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:40.649494   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.649502   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.649506   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.653345   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:40.654392   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:40.654411   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:40.654421   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:40.654433   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:40.657548   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:41.149908   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:41.149935   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.149945   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.149953   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.154123   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:41.155133   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:41.155152   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.155159   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.155163   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.158195   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:41.158975   29367 pod_ready.go:102] pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace has status "Ready":"False"
	I0505 21:19:41.649749   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:41.649775   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.649787   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.649794   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.654568   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:41.656551   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:41.656565   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:41.656572   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:41.656577   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:41.659941   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:42.150082   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:42.150105   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.150113   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.150116   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.153424   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:42.154599   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:42.154616   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.154625   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.154631   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.158203   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:42.649950   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:42.649988   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.649996   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.650005   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.654062   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:42.655084   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:42.655103   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:42.655115   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:42.655121   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:42.658378   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.149411   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980-m03
	I0505 21:19:43.149435   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.149447   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.149453   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.152549   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.153483   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:43.153500   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.153510   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.153520   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.156274   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.156887   29367 pod_ready.go:92] pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.156905   29367 pod_ready.go:81] duration metric: took 6.507754855s for pod "kube-apiserver-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.156914   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.156962   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980
	I0505 21:19:43.156970   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.156977   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.156982   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.159900   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.160433   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:43.160447   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.160454   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.160458   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.163045   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.163577   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.163597   29367 pod_ready.go:81] duration metric: took 6.675601ms for pod "kube-controller-manager-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.163609   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.163674   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m02
	I0505 21:19:43.163685   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.163697   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.163704   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.167101   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.167760   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:43.167774   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.167781   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.167786   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.170373   29367 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 21:19:43.171104   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.171122   29367 pod_ready.go:81] duration metric: took 7.503084ms for pod "kube-controller-manager-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.171131   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.243221   29367 request.go:629] Waited for 72.041665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m03
	I0505 21:19:43.243279   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980-m03
	I0505 21:19:43.243284   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.243296   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.243300   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.246923   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.443103   29367 request.go:629] Waited for 195.403606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:43.443188   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:43.443194   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.443201   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.443206   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.447489   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:43.448483   29367 pod_ready.go:92] pod "kube-controller-manager-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.448501   29367 pod_ready.go:81] duration metric: took 277.36467ms for pod "kube-controller-manager-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.448511   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.642757   29367 request.go:629] Waited for 194.191312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd
	I0505 21:19:43.642848   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd
	I0505 21:19:43.642856   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.642871   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.642881   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.646980   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:43.842881   29367 request.go:629] Waited for 195.087599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:43.842936   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:43.842945   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:43.842957   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:43.842965   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:43.846087   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:43.846720   29367 pod_ready.go:92] pod "kube-proxy-8xdzd" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:43.846735   29367 pod_ready.go:81] duration metric: took 398.218356ms for pod "kube-proxy-8xdzd" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:43.846744   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqq6b" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.042858   29367 request.go:629] Waited for 196.051735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqq6b
	I0505 21:19:44.042957   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqq6b
	I0505 21:19:44.042970   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.042980   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.042986   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.046927   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:44.243099   29367 request.go:629] Waited for 195.356238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:44.243181   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:44.243188   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.243195   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.243199   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.246854   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:44.247640   29367 pod_ready.go:92] pod "kube-proxy-nqq6b" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:44.247658   29367 pod_ready.go:81] duration metric: took 400.907383ms for pod "kube-proxy-nqq6b" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.247679   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.442804   29367 request.go:629] Waited for 195.070743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:19:44.442860   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wbf7q
	I0505 21:19:44.442865   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.442872   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.442876   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.446754   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:44.643247   29367 request.go:629] Waited for 195.334258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:44.643307   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:44.643318   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.643329   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.643336   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.647623   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:44.648755   29367 pod_ready.go:92] pod "kube-proxy-wbf7q" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:44.648774   29367 pod_ready.go:81] duration metric: took 401.089611ms for pod "kube-proxy-wbf7q" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.648784   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:44.842799   29367 request.go:629] Waited for 193.905816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:19:44.842868   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980
	I0505 21:19:44.842874   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:44.842881   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:44.842886   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:44.846964   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:45.043128   29367 request.go:629] Waited for 195.357501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:45.043183   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980
	I0505 21:19:45.043190   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.043201   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.043208   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.047472   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:45.048467   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:45.048485   29367 pod_ready.go:81] duration metric: took 399.695996ms for pod "kube-scheduler-ha-322980" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.048496   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.242547   29367 request.go:629] Waited for 193.994855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:19:45.242599   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m02
	I0505 21:19:45.242604   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.242611   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.242615   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.246122   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.443521   29367 request.go:629] Waited for 196.571897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:45.443576   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02
	I0505 21:19:45.443582   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.443589   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.443596   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.447402   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.448284   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:45.448304   29367 pod_ready.go:81] duration metric: took 399.802534ms for pod "kube-scheduler-ha-322980-m02" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.448314   29367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.643553   29367 request.go:629] Waited for 195.18216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m03
	I0505 21:19:45.643642   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-322980-m03
	I0505 21:19:45.643660   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.643675   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.643685   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.647166   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.842567   29367 request.go:629] Waited for 194.47774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:45.842660   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes/ha-322980-m03
	I0505 21:19:45.842668   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.842680   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.842686   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.846121   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:45.846857   29367 pod_ready.go:92] pod "kube-scheduler-ha-322980-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 21:19:45.846880   29367 pod_ready.go:81] duration metric: took 398.558422ms for pod "kube-scheduler-ha-322980-m03" in "kube-system" namespace to be "Ready" ...
	I0505 21:19:45.846894   29367 pod_ready.go:38] duration metric: took 10.399629772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
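	Editor's note: the polling recorded above is the readiness check minikube repeats for every control-plane pod: fetch the pod, fetch its node, and try again until the pod's PodReady condition is True. The sketch below only illustrates that pattern with client-go and is not minikube's own pod_ready.go; the kubeconfig path, namespace, pod name, poll interval, and timeout are assumptions modelled on the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True, the same
// status the log lines above print as has status "Ready":"True".
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Illustrative target taken from the log; any pod name works here.
	const ns, name = "kube-system", "kube-apiserver-ha-322980-m03"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}

	The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the log are emitted by client-go's own rate limiter delaying requests before they are sent; at this request rate they are expected and are not rejections from the apiserver.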
	I0505 21:19:45.846922   29367 api_server.go:52] waiting for apiserver process to appear ...
	I0505 21:19:45.846990   29367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:19:45.865989   29367 api_server.go:72] duration metric: took 17.77790312s to wait for apiserver process to appear ...
	I0505 21:19:45.866011   29367 api_server.go:88] waiting for apiserver healthz status ...
	I0505 21:19:45.866032   29367 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0505 21:19:45.872618   29367 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I0505 21:19:45.872680   29367 round_trippers.go:463] GET https://192.168.39.178:8443/version
	I0505 21:19:45.872703   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:45.872713   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:45.872721   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:45.873554   29367 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0505 21:19:45.873605   29367 api_server.go:141] control plane version: v1.30.0
	I0505 21:19:45.873618   29367 api_server.go:131] duration metric: took 7.601764ms to wait for apiserver health ...
	I0505 21:19:45.873626   29367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 21:19:46.043033   29367 request.go:629] Waited for 169.347897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.043114   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.043129   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.043140   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.043150   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.051571   29367 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 21:19:46.059684   29367 system_pods.go:59] 24 kube-system pods found
	I0505 21:19:46.059712   29367 system_pods.go:61] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:19:46.059717   29367 system_pods.go:61] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:19:46.059721   29367 system_pods.go:61] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:19:46.059725   29367 system_pods.go:61] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:19:46.059728   29367 system_pods.go:61] "etcd-ha-322980-m03" [15754f58-e7a0-4f74-b448-d1b628a32678] Running
	I0505 21:19:46.059731   29367 system_pods.go:61] "kindnet-ks55j" [d7afae98-1d61-43b1-ac25-c085e289db4d] Running
	I0505 21:19:46.059734   29367 system_pods.go:61] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:19:46.059736   29367 system_pods.go:61] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:19:46.059741   29367 system_pods.go:61] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:19:46.059744   29367 system_pods.go:61] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:19:46.059747   29367 system_pods.go:61] "kube-apiserver-ha-322980-m03" [575db24d-e297-4995-903b-34d0c3a2a268] Running
	I0505 21:19:46.059751   29367 system_pods.go:61] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:19:46.059754   29367 system_pods.go:61] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:19:46.059757   29367 system_pods.go:61] "kube-controller-manager-ha-322980-m03" [acdc19e3-d12c-4c23-86f0-b10845b406ce] Running
	I0505 21:19:46.059760   29367 system_pods.go:61] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:19:46.059763   29367 system_pods.go:61] "kube-proxy-nqq6b" [73c9f1e1-7917-43ec-8876-e6f4280ecad3] Running
	I0505 21:19:46.059767   29367 system_pods.go:61] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:19:46.059772   29367 system_pods.go:61] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:19:46.059775   29367 system_pods.go:61] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:19:46.059778   29367 system_pods.go:61] "kube-scheduler-ha-322980-m03" [15c200c1-1945-43fa-87c7-900bb219da1d] Running
	I0505 21:19:46.059784   29367 system_pods.go:61] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:19:46.059787   29367 system_pods.go:61] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:19:46.059790   29367 system_pods.go:61] "kube-vip-ha-322980-m03" [5083810a-dbf0-4a5f-9006-02673bc8d1c7] Running
	I0505 21:19:46.059793   29367 system_pods.go:61] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:19:46.059798   29367 system_pods.go:74] duration metric: took 186.165526ms to wait for pod list to return data ...
	I0505 21:19:46.059809   29367 default_sa.go:34] waiting for default service account to be created ...
	I0505 21:19:46.242614   29367 request.go:629] Waited for 182.734312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:19:46.242670   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/default/serviceaccounts
	I0505 21:19:46.242679   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.242687   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.242691   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.246676   29367 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 21:19:46.246923   29367 default_sa.go:45] found service account: "default"
	I0505 21:19:46.246946   29367 default_sa.go:55] duration metric: took 187.130677ms for default service account to be created ...
	I0505 21:19:46.246957   29367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 21:19:46.443278   29367 request.go:629] Waited for 196.260889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.443331   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/namespaces/kube-system/pods
	I0505 21:19:46.443336   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.443343   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.443347   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.450152   29367 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 21:19:46.457066   29367 system_pods.go:86] 24 kube-system pods found
	I0505 21:19:46.457093   29367 system_pods.go:89] "coredns-7db6d8ff4d-78zmw" [e066e3ad-0574-44f9-acab-d7cec8b86788] Running
	I0505 21:19:46.457101   29367 system_pods.go:89] "coredns-7db6d8ff4d-fqt45" [27bdadca-f49c-4f50-b09c-07dd6067f39a] Running
	I0505 21:19:46.457107   29367 system_pods.go:89] "etcd-ha-322980" [71d18dfd-9116-470d-bc5b-808adc8d2440] Running
	I0505 21:19:46.457113   29367 system_pods.go:89] "etcd-ha-322980-m02" [d38daa43-9f21-433e-8f83-67d492e7eb6f] Running
	I0505 21:19:46.457119   29367 system_pods.go:89] "etcd-ha-322980-m03" [15754f58-e7a0-4f74-b448-d1b628a32678] Running
	I0505 21:19:46.457125   29367 system_pods.go:89] "kindnet-ks55j" [d7afae98-1d61-43b1-ac25-c085e289db4d] Running
	I0505 21:19:46.457131   29367 system_pods.go:89] "kindnet-lmgkm" [78b6e816-d020-4105-b15c-3142323a6627] Running
	I0505 21:19:46.457137   29367 system_pods.go:89] "kindnet-lwtnx" [4033535e-69f1-426c-bb17-831fad6336d5] Running
	I0505 21:19:46.457144   29367 system_pods.go:89] "kube-apiserver-ha-322980" [feaf1c0e-9d36-499a-861f-e92298c928b8] Running
	I0505 21:19:46.457154   29367 system_pods.go:89] "kube-apiserver-ha-322980-m02" [b5d35b3b-53cb-4dc7-9ecd-3cea1362ac0e] Running
	I0505 21:19:46.457160   29367 system_pods.go:89] "kube-apiserver-ha-322980-m03" [575db24d-e297-4995-903b-34d0c3a2a268] Running
	I0505 21:19:46.457168   29367 system_pods.go:89] "kube-controller-manager-ha-322980" [e7bf9f5c-70c2-46a5-bb4d-861a17fd3d64] Running
	I0505 21:19:46.457176   29367 system_pods.go:89] "kube-controller-manager-ha-322980-m02" [7a85d624-043e-4429-afc4-831a59c6f349] Running
	I0505 21:19:46.457186   29367 system_pods.go:89] "kube-controller-manager-ha-322980-m03" [acdc19e3-d12c-4c23-86f0-b10845b406ce] Running
	I0505 21:19:46.457194   29367 system_pods.go:89] "kube-proxy-8xdzd" [d0b6492d-c0f5-45dd-8482-c447b81daa66] Running
	I0505 21:19:46.457214   29367 system_pods.go:89] "kube-proxy-nqq6b" [73c9f1e1-7917-43ec-8876-e6f4280ecad3] Running
	I0505 21:19:46.457222   29367 system_pods.go:89] "kube-proxy-wbf7q" [ae43ac77-d16b-4f36-8c27-23d1cf0431e3] Running
	I0505 21:19:46.457232   29367 system_pods.go:89] "kube-scheduler-ha-322980" [b77aa3a9-f911-42e0-9c83-91ae75a0424c] Running
	I0505 21:19:46.457239   29367 system_pods.go:89] "kube-scheduler-ha-322980-m02" [d011a166-d283-4933-a48f-eb36000d04a1] Running
	I0505 21:19:46.457249   29367 system_pods.go:89] "kube-scheduler-ha-322980-m03" [15c200c1-1945-43fa-87c7-900bb219da1d] Running
	I0505 21:19:46.457256   29367 system_pods.go:89] "kube-vip-ha-322980" [8743dbcc-49f9-46e8-8088-cd5020429c08] Running
	I0505 21:19:46.457265   29367 system_pods.go:89] "kube-vip-ha-322980-m02" [3cacc089-9d3d-4da6-9bce-0777ffe737d1] Running
	I0505 21:19:46.457274   29367 system_pods.go:89] "kube-vip-ha-322980-m03" [5083810a-dbf0-4a5f-9006-02673bc8d1c7] Running
	I0505 21:19:46.457282   29367 system_pods.go:89] "storage-provisioner" [bc212ac3-7499-4edc-b5a5-622b0bd4a891] Running
	I0505 21:19:46.457292   29367 system_pods.go:126] duration metric: took 210.328387ms to wait for k8s-apps to be running ...
	I0505 21:19:46.457305   29367 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 21:19:46.457355   29367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:19:46.475862   29367 system_svc.go:56] duration metric: took 18.552221ms WaitForService to wait for kubelet
	I0505 21:19:46.475887   29367 kubeadm.go:576] duration metric: took 18.38780276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:19:46.475909   29367 node_conditions.go:102] verifying NodePressure condition ...
	I0505 21:19:46.643292   29367 request.go:629] Waited for 167.315134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.178:8443/api/v1/nodes
	I0505 21:19:46.643352   29367 round_trippers.go:463] GET https://192.168.39.178:8443/api/v1/nodes
	I0505 21:19:46.643357   29367 round_trippers.go:469] Request Headers:
	I0505 21:19:46.643364   29367 round_trippers.go:473]     Accept: application/json, */*
	I0505 21:19:46.643368   29367 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0505 21:19:46.647544   29367 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 21:19:46.648876   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:19:46.648897   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:19:46.648908   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:19:46.648912   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:19:46.648916   29367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 21:19:46.648919   29367 node_conditions.go:123] node cpu capacity is 2
	I0505 21:19:46.648923   29367 node_conditions.go:105] duration metric: took 173.008596ms to run NodePressure ...
	I0505 21:19:46.648937   29367 start.go:240] waiting for startup goroutines ...
	I0505 21:19:46.648959   29367 start.go:254] writing updated cluster config ...
	I0505 21:19:46.649219   29367 ssh_runner.go:195] Run: rm -f paused
	I0505 21:19:46.698818   29367 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0505 21:19:46.700817   29367 out.go:177] * Done! kubectl is now configured to use "ha-322980" cluster and "default" namespace by default
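	Editor's note: the "waiting for apiserver healthz status" step logged a little earlier is a plain HTTPS GET against /healthz, retried until it returns 200 OK with the body "ok". A minimal standalone sketch of that check follows; it is not minikube's api_server.go, and the endpoint URL, timeout, and the InsecureSkipVerify shortcut are assumptions (a real client should trust the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or the timeout expires,
// mirroring the healthz wait recorded in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Shortcut for a self-signed test cluster; load the cluster CA
		// instead of skipping verification in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.39.178:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}

	With a configured kubeconfig, kubectl get --raw /healthz performs the equivalent request.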
	
	
	==> CRI-O <==
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.189365547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944288189335947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=686292c0-92b8-4eba-9f32-201888f2b297 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.189926581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b641cef9-5514-4898-b7bb-e84fe2a34e43 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.190016771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b641cef9-5514-4898-b7bb-e84fe2a34e43 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.190263313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b641cef9-5514-4898-b7bb-e84fe2a34e43 name=/runtime.v1.RuntimeService/ListContainers
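	Editor's note: the CRI-O entries in this section are the daemon's debug view of ordinary CRI gRPC calls arriving over its unix socket (Version, ImageFsInfo, and ListContainers with an empty filter, hence the "No filters were applied" lines). The sketch below issues the same ListContainers RPC directly and is illustrative only; the socket path and the k8s.io/cri-api import path are assumptions about a typical CRI-O install.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: the default CRI-O socket path.
	const target = "unix:///var/run/crio/crio.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, target,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC the log records as /runtime.v1.RuntimeService/ListContainers;
	// an empty request means no filters, i.e. the full container list.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%v\n", c.Id, c.Metadata.Name, c.State)
	}
}

	From a shell on the node, crictl ps -a pointed at the same socket returns the equivalent listing.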
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.236362155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae6a0987-93e6-4bf4-b375-43764984d202 name=/runtime.v1.RuntimeService/Version
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.236507343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae6a0987-93e6-4bf4-b375-43764984d202 name=/runtime.v1.RuntimeService/Version
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.238604200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7af39295-3c94-47d9-beaf-4eb268500d76 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.239294714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944288239268658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7af39295-3c94-47d9-beaf-4eb268500d76 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.240500512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aadf150f-e2b3-4cce-8ca1-dbd48877b2c4 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.240630552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aadf150f-e2b3-4cce-8ca1-dbd48877b2c4 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.241320708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aadf150f-e2b3-4cce-8ca1-dbd48877b2c4 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.291390673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b9203a8-3764-4a3d-9443-9584b4854d6f name=/runtime.v1.RuntimeService/Version
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.291494770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b9203a8-3764-4a3d-9443-9584b4854d6f name=/runtime.v1.RuntimeService/Version
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.292854256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ac0437c-6ad2-4ff0-85d1-59063c81a1ba name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.293364215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944288293340409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ac0437c-6ad2-4ff0-85d1-59063c81a1ba name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.293875808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ab22e21-b974-4e9e-a2c0-90f1cad4d073 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.293960719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ab22e21-b974-4e9e-a2c0-90f1cad4d073 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.294258852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ab22e21-b974-4e9e-a2c0-90f1cad4d073 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.340234734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15be87f6-060d-47fe-8c8e-520ca629898e name=/runtime.v1.RuntimeService/Version
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.340374951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15be87f6-060d-47fe-8c8e-520ca629898e name=/runtime.v1.RuntimeService/Version
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.342420803Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b97d8c73-7962-426a-aef4-7aebc1b9a1b6 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.342933797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944288342911827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b97d8c73-7962-426a-aef4-7aebc1b9a1b6 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.343525330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06aca272-d736-4cd6-af74-97201485d439 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.343602330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06aca272-d736-4cd6-af74-97201485d439 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:24:48 ha-322980 crio[687]: time="2024-05-05 21:24:48.343990468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714943992554925716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788390066842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714943788394468442,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355,PodSandboxId:bca34597f1572fc75baa0a8f2853d06c3aa8e5aa575d8f5b15abb16332e61951,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1714943788301282386,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6,PodSandboxId:1a6fd410f5e049e20c5d538edaa138d7e911b22709b53f7f82ef8a2c18cdc5c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171494378
6261169619,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714943786049309122,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f,PodSandboxId:f40e5905346ce4899eeb30b9f2633d4b891261ff9924be9e4b68d323a59bb10b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714943768245843963,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6f2a36965b5efe68085ec9ac39d075,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714943765197348186,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f,PodSandboxId:82abab5bb480d2f5d83a105d70b76f2c29612396a2bf9abc6a4fd61fe5261c21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714943765127511026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuberne
tes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d,PodSandboxId:b3b0a14099e308d6e2e5766fe30514d082beb487575e5dbaa3e35778de907dd3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714943765026808480,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes
.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714943765081603384,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06aca272-d736-4cd6-af74-97201485d439 name=/runtime.v1.RuntimeService/ListContainers
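	The RPCs echoed above (ListContainers, Version, ImageFsInfo) are standard CRI endpoints that CRI-O serves on unix:///var/run/crio/crio.sock (the socket named in the node annotations below); the debug lines are just request/response pairs logged by the otel interceptor. As a rough sketch only, assuming crictl is available inside the VM (it normally ships with the minikube ISO), the same endpoints can be exercised by hand using the ssh pattern used elsewhere in this report:
	
	  out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl version"       # RuntimeService/Version
	  out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl imagefsinfo"   # ImageService/ImageFsInfo
	  out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl ps -a"         # RuntimeService/ListContainers, no filter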
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d9743f3da0de5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   238b5b24a572e       busybox-fc5497c4f-xt9l5
	0b360d142570d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   cd560b1055b35       coredns-7db6d8ff4d-fqt45
	e065fafa4b7aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   9f56aff0e5f86       coredns-7db6d8ff4d-78zmw
	63d1d40ce5925       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   bca34597f1572       storage-provisioner
	57151a6a532be       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      8 minutes ago       Running             kindnet-cni               0                   1a6fd410f5e04       kindnet-lwtnx
	4da23c6720461       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      8 minutes ago       Running             kube-proxy                0                   8b3a42343ade0       kube-proxy-8xdzd
	abf4aae19a401       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     8 minutes ago       Running             kube-vip                  0                   f40e5905346ce       kube-vip-ha-322980
	d73ef383ce1ab       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago       Running             kube-scheduler            0                   913466e1710aa       kube-scheduler-ha-322980
	b13d21aa2e8e7       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago       Running             kube-apiserver            0                   82abab5bb480d       kube-apiserver-ha-322980
	97769959b22d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   01d81d8dc3bcb       etcd-ha-322980
	6ebcc8c1017ed       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago       Running             kube-controller-manager   0                   b3b0a14099e30       kube-controller-manager-ha-322980
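	Each row in this table corresponds to one Container entry in the ListContainers responses above: CONTAINER is the truncated Id and POD ID the truncated PodSandboxId. If a row looked suspicious, a reasonable next step (not something this run needed, shown purely as an illustration with the busybox container's id from the first column) would be:
	
	  out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl inspect d9743f3da0de5"
	  out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl logs d9743f3da0de5"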
	
	
	==> coredns [0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b] <==
	[INFO] 10.244.1.2:40837 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.017605675s
	[INFO] 10.244.0.4:37323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000501931s
	[INFO] 10.244.0.4:37770 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000105156s
	[INFO] 10.244.0.4:49857 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002253399s
	[INFO] 10.244.2.2:55982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194151s
	[INFO] 10.244.1.2:51278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017965s
	[INFO] 10.244.1.2:37849 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301689s
	[INFO] 10.244.0.4:58808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118281s
	[INFO] 10.244.0.4:59347 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074943s
	[INFO] 10.244.0.4:44264 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127442s
	[INFO] 10.244.0.4:45870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001035173s
	[INFO] 10.244.0.4:45397 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126149s
	[INFO] 10.244.2.2:38985 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241724s
	[INFO] 10.244.1.2:41200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185837s
	[INFO] 10.244.0.4:53459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188027s
	[INFO] 10.244.0.4:43760 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146395s
	[INFO] 10.244.2.2:45375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112163s
	[INFO] 10.244.2.2:60638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000225418s
	[INFO] 10.244.1.2:33012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251463s
	[INFO] 10.244.0.4:48613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079688s
	[INFO] 10.244.0.4:54870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050324s
	[INFO] 10.244.0.4:36700 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167489s
	[INFO] 10.244.0.4:56859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077358s
	[INFO] 10.244.2.2:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122063s
	[INFO] 10.244.2.2:43717 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123902s
	
	
	==> coredns [e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d] <==
	[INFO] 10.244.1.2:55822 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180574s
	[INFO] 10.244.1.2:45364 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154196s
	[INFO] 10.244.1.2:58343 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000454003s
	[INFO] 10.244.0.4:35231 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621128s
	[INFO] 10.244.0.4:32984 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051146s
	[INFO] 10.244.0.4:43928 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004146s
	[INFO] 10.244.2.2:44358 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001832218s
	[INFO] 10.244.2.2:34081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017944s
	[INFO] 10.244.2.2:36047 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087749s
	[INFO] 10.244.2.2:60557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001143135s
	[INFO] 10.244.2.2:60835 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073052s
	[INFO] 10.244.2.2:42876 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093376s
	[INFO] 10.244.2.2:33057 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070619s
	[INFO] 10.244.1.2:41910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009436s
	[INFO] 10.244.1.2:43839 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082555s
	[INFO] 10.244.1.2:39008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075851s
	[INFO] 10.244.0.4:47500 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110566s
	[INFO] 10.244.0.4:44728 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071752s
	[INFO] 10.244.2.2:38205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222144s
	[INFO] 10.244.2.2:46321 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164371s
	[INFO] 10.244.1.2:41080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205837s
	[INFO] 10.244.1.2:58822 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264144s
	[INFO] 10.244.1.2:55995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174393s
	[INFO] 10.244.2.2:46471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00069286s
	[INFO] 10.244.2.2:52414 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163744s
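	The entries above come from coredns's log plugin: client ip:port, query type and name, response code, and lookup latency. To generate comparable queries on demand (a common way to sanity-check cluster DNS, not a step this test run performed), a throwaway busybox pod works; this assumes the kubectl context is named after the profile, as it is elsewhere in this report, and uses busybox:1.28 because its nslookup output is well behaved:
	
	  kubectl --context ha-322980 run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local
	  kubectl --context ha-322980 run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- nslookup host.minikube.internal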
	
	
	==> describe nodes <==
	Name:               ha-322980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T21_16_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:24:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:20:18 +0000   Sun, 05 May 2024 21:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-322980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a019ec328ab467ca04365748baaa367
	  System UUID:                3a019ec3-28ab-467c-a043-65748baaa367
	  Boot ID:                    c9018f9a-79b9-43c5-a307-9ae120187dfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xt9l5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 coredns-7db6d8ff4d-78zmw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 coredns-7db6d8ff4d-fqt45             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 etcd-ha-322980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m34s
	  kube-system                 kindnet-lwtnx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-ha-322980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-ha-322980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-8xdzd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-ha-322980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-vip-ha-322980                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m22s  kube-proxy       
	  Normal  Starting                 8m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m34s  kubelet          Node ha-322980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s  kubelet          Node ha-322980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s  kubelet          Node ha-322980 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m24s  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal  NodeReady                8m21s  kubelet          Node ha-322980 status is now: NodeReady
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal  RegisteredNode           5m6s   node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	
	
	Name:               ha-322980-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:21:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:20:15 +0000   Sun, 05 May 2024 21:22:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-322980-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5d1651406694de39b61eff245fccb61
	  System UUID:                c5d16514-0669-4de3-9b61-eff245fccb61
	  Boot ID:                    f0d34a2f-c3e3-4515-ab49-7c79a5c98854
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tbmdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 etcd-ha-322980-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-lmgkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-apiserver-ha-322980-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-322980-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-wbf7q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-scheduler-ha-322980-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-322980-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m36s (x8 over 6m36s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x8 over 6m36s)  kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x7 over 6m36s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  NodeNotReady             2m41s                  node-controller  Node ha-322980-m02 status is now: NodeNotReady
	
	
	Name:               ha-322980-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_19_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:19:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:24:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ha-322980-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1273ee04f2de426dbabc52e46998b0eb
	  System UUID:                1273ee04-f2de-426d-babc-52e46998b0eb
	  Boot ID:                    35fdaf53-db70-4446-a9c3-71a0744d3bea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xz268                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 etcd-ha-322980-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-ks55j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-322980-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-322980-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-nqq6b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-322980-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-322980-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-322980-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-322980-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-322980-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	
	
	Name:               ha-322980-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_20_29_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:20:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:24:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:20:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:20:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:20:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:21:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-322980-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c8db3356b24ba197e491501ddbfd49
	  System UUID:                a4c8db33-56b2-4ba1-97e4-91501ddbfd49
	  Boot ID:                    9ee2f344-9fdd-4182-a447-83dc5b12dc4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nnc4q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-proxy-68cxr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m20s (x3 over 4m21s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x3 over 4m21s)  kubelet          Node ha-322980-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x3 over 4m21s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  NodeReady                3m42s                  kubelet          Node ha-322980-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May 5 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051886] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042048] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.638371] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.482228] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.738174] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.501831] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.064246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066779] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.227983] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.115503] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.299594] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +5.048468] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.072016] kauditd_printk_skb: 130 callbacks suppressed
	[May 5 21:16] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.935027] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.150561] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.089537] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.653864] kauditd_printk_skb: 21 callbacks suppressed
	[May 5 21:18] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923] <==
	{"level":"warn","ts":"2024-05-05T21:24:48.681198Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.695512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.700449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.721547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.733294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.742878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.749923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.755478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.759069Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.772538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.780034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.786621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.790399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.794652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.806009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.81406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.820732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.821727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.822988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.827534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.831505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.836264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.842338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.848732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:24:48.855844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:24:48 up 9 min,  0 users,  load average: 0.28, 0.35, 0.19
	Linux ha-322980 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57151a6a532bef4496bb9b3b51447c0e86324af7644760046c4f986f9bc000e6] <==
	I0505 21:24:18.449315       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:24:28.457068       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:24:28.457136       1 main.go:227] handling current node
	I0505 21:24:28.457152       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:24:28.457173       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:24:28.457293       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:24:28.457332       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:24:28.457420       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:24:28.457429       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:24:38.474462       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:24:38.474537       1 main.go:227] handling current node
	I0505 21:24:38.474561       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:24:38.474579       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:24:38.474702       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:24:38.474724       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:24:38.474900       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:24:38.474939       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:24:48.489904       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:24:48.489926       1 main.go:227] handling current node
	I0505 21:24:48.489935       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:24:48.489940       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:24:48.490035       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:24:48.490040       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:24:48.490090       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:24:48.490095       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f] <==
	I0505 21:16:14.456940       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0505 21:16:14.471073       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0505 21:16:25.091428       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0505 21:16:25.243573       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0505 21:19:24.348476       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0505 21:19:24.348936       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0505 21:19:24.349065       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 10.874µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0505 21:19:24.350385       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0505 21:19:24.350566       1 timeout.go:142] post-timeout activity - time-elapsed: 2.907309ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0505 21:19:53.980101       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45334: use of closed network connection
	E0505 21:19:54.191103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45360: use of closed network connection
	E0505 21:19:54.425717       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45380: use of closed network connection
	E0505 21:19:54.674501       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45404: use of closed network connection
	E0505 21:19:54.892979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:60986: use of closed network connection
	E0505 21:19:55.095220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32768: use of closed network connection
	E0505 21:19:55.318018       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32788: use of closed network connection
	E0505 21:19:55.521006       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32802: use of closed network connection
	E0505 21:19:55.726708       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32822: use of closed network connection
	E0505 21:19:56.050278       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32854: use of closed network connection
	E0505 21:19:56.261228       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32870: use of closed network connection
	E0505 21:19:56.472141       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32894: use of closed network connection
	E0505 21:19:56.685025       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32920: use of closed network connection
	E0505 21:19:56.916745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32946: use of closed network connection
	E0505 21:19:57.119236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32978: use of closed network connection
	W0505 21:21:40.559127       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.29]
	
	
	==> kube-controller-manager [6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d] <==
	I0505 21:19:23.570835       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-322980-m03" podCIDRs=["10.244.2.0/24"]
	I0505 21:19:24.568700       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-322980-m03"
	I0505 21:19:47.662623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.19352ms"
	I0505 21:19:47.747354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.336305ms"
	I0505 21:19:47.972892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="225.464365ms"
	I0505 21:19:48.039165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.156608ms"
	I0505 21:19:48.061198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.948445ms"
	I0505 21:19:48.062605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.6µs"
	I0505 21:19:48.123483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.360242ms"
	I0505 21:19:48.124314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.382µs"
	I0505 21:19:52.552250       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.118µs"
	I0505 21:19:52.698304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.181575ms"
	I0505 21:19:52.700709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="612.499µs"
	I0505 21:19:53.068287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.874433ms"
	I0505 21:19:53.068456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.906µs"
	I0505 21:19:53.481560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.461316ms"
	I0505 21:19:53.481966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="322.614µs"
	E0505 21:20:27.922891       1 certificate_controller.go:146] Sync csr-2rjtw failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2rjtw": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:20:28.227431       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-322980-m04\" does not exist"
	I0505 21:20:28.244528       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-322980-m04" podCIDRs=["10.244.3.0/24"]
	I0505 21:20:29.581280       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-322980-m04"
	I0505 21:21:06.985099       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-322980-m04"
	I0505 21:22:07.478633       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-322980-m04"
	I0505 21:22:07.677652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.585128ms"
	I0505 21:22:07.678061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.262µs"
	
	
	==> kube-proxy [4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c] <==
	I0505 21:16:26.420980       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:16:26.431622       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.178"]
	I0505 21:16:26.625948       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:16:26.626022       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:16:26.626042       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:16:26.637113       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:16:26.637368       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:16:26.637407       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:16:26.638392       1 config.go:192] "Starting service config controller"
	I0505 21:16:26.638441       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:16:26.638467       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:16:26.638471       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:16:26.639227       1 config.go:319] "Starting node config controller"
	I0505 21:16:26.639264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:16:26.739349       1 shared_informer.go:320] Caches are synced for node config
	I0505 21:16:26.739451       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:16:26.739461       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b] <==
	I0505 21:16:13.305853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0505 21:19:47.876299       1 schedule_one.go:1069] "Error occurred" err="Pod default/busybox-fc5497c4f-p5jrm is already present in the active queue" pod="default/busybox-fc5497c4f-p5jrm"
	E0505 21:19:47.902396       1 schedule_one.go:1069] "Error occurred" err="Pod default/busybox-fc5497c4f-jsc6v is already present in the active queue" pod="default/busybox-fc5497c4f-jsc6v"
	E0505 21:20:28.319936       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-px9md\": pod kindnet-px9md is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-px9md" node="ha-322980-m04"
	E0505 21:20:28.321222       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod afeb8dbc-418f-484d-99aa-56a1a174965a(kube-system/kindnet-px9md) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-px9md"
	E0505 21:20:28.321301       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-px9md\": pod kindnet-px9md is already assigned to node \"ha-322980-m04\"" pod="kube-system/kindnet-px9md"
	I0505 21:20:28.321335       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-px9md" node="ha-322980-m04"
	E0505 21:20:28.320996       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-w4c7b\": pod kube-proxy-w4c7b is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-w4c7b" node="ha-322980-m04"
	E0505 21:20:28.326375       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 059c2bdf-8ad0-4281-b165-011150d463a6(kube-system/kube-proxy-w4c7b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-w4c7b"
	E0505 21:20:28.326402       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-w4c7b\": pod kube-proxy-w4c7b is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-w4c7b"
	I0505 21:20:28.326454       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-w4c7b" node="ha-322980-m04"
	E0505 21:20:28.366965       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nnc4q\": pod kindnet-nnc4q is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nnc4q" node="ha-322980-m04"
	E0505 21:20:28.367080       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nnc4q\": pod kindnet-nnc4q is already assigned to node \"ha-322980-m04\"" pod="kube-system/kindnet-nnc4q"
	E0505 21:20:28.369383       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vmwcl\": pod kube-proxy-vmwcl is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vmwcl" node="ha-322980-m04"
	E0505 21:20:28.369838       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5169a0e2-c91d-413a-bbaa-87d14f7deb52(kube-system/kube-proxy-vmwcl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vmwcl"
	E0505 21:20:28.370049       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vmwcl\": pod kube-proxy-vmwcl is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-vmwcl"
	I0505 21:20:28.370412       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vmwcl" node="ha-322980-m04"
	E0505 21:20:28.480473       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vnzzb\": pod kube-proxy-vnzzb is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vnzzb" node="ha-322980-m04"
	E0505 21:20:28.482734       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 600688b7-2a22-48e5-88f0-1dc70996876b(kube-system/kube-proxy-vnzzb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vnzzb"
	E0505 21:20:28.482947       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vnzzb\": pod kube-proxy-vnzzb is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-vnzzb"
	I0505 21:20:28.482999       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vnzzb" node="ha-322980-m04"
	E0505 21:20:30.687013       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tk4f6\": pod kube-proxy-tk4f6 is already assigned to node \"ha-322980-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tk4f6" node="ha-322980-m04"
	E0505 21:20:30.687120       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a81265bf-8396-46b9-b0f8-c8e1bf8271ee(kube-system/kube-proxy-tk4f6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tk4f6"
	E0505 21:20:30.687148       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tk4f6\": pod kube-proxy-tk4f6 is already assigned to node \"ha-322980-m04\"" pod="kube-system/kube-proxy-tk4f6"
	I0505 21:20:30.687172       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tk4f6" node="ha-322980-m04"
	
	
	==> kubelet <==
	May 05 21:20:14 ha-322980 kubelet[1385]: E0505 21:20:14.419155    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:20:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:20:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:20:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:20:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:21:14 ha-322980 kubelet[1385]: E0505 21:21:14.406480    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:21:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:21:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:21:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:21:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:22:14 ha-322980 kubelet[1385]: E0505 21:22:14.409202    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:22:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:22:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:22:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:22:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:23:14 ha-322980 kubelet[1385]: E0505 21:23:14.412106    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:23:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:23:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:23:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:23:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:24:14 ha-322980 kubelet[1385]: E0505 21:24:14.408533    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:24:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:24:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:24:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:24:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-322980 -n ha-322980
helpers_test.go:261: (dbg) Run:  kubectl --context ha-322980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (318.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-322980 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-322980 -v=7 --alsologtostderr
E0505 21:24:59.513295   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:26:51.947149   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-322980 -v=7 --alsologtostderr: exit status 82 (2m2.726361619s)

                                                
                                                
-- stdout --
	* Stopping node "ha-322980-m04"  ...
	* Stopping node "ha-322980-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:24:50.409745   35937 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:24:50.409871   35937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:50.409881   35937 out.go:304] Setting ErrFile to fd 2...
	I0505 21:24:50.409886   35937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:24:50.410083   35937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:24:50.410355   35937 out.go:298] Setting JSON to false
	I0505 21:24:50.410439   35937 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:50.410799   35937 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:50.410899   35937 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:24:50.411083   35937 mustload.go:65] Loading cluster: ha-322980
	I0505 21:24:50.411260   35937 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:24:50.411291   35937 stop.go:39] StopHost: ha-322980-m04
	I0505 21:24:50.411751   35937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:50.411797   35937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:50.427498   35937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0505 21:24:50.428033   35937 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:50.428672   35937 main.go:141] libmachine: Using API Version  1
	I0505 21:24:50.428706   35937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:50.429156   35937 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:50.431919   35937 out.go:177] * Stopping node "ha-322980-m04"  ...
	I0505 21:24:50.433532   35937 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0505 21:24:50.433575   35937 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:24:50.433829   35937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0505 21:24:50.433858   35937 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:24:50.437156   35937 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:50.437653   35937 main.go:141] libmachine: (ha-322980-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:17:2b", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:20:13 +0000 UTC Type:0 Mac:52:54:00:dd:17:2b Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-322980-m04 Clientid:01:52:54:00:dd:17:2b}
	I0505 21:24:50.437680   35937 main.go:141] libmachine: (ha-322980-m04) DBG | domain ha-322980-m04 has defined IP address 192.168.39.169 and MAC address 52:54:00:dd:17:2b in network mk-ha-322980
	I0505 21:24:50.437815   35937 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHPort
	I0505 21:24:50.437999   35937 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHKeyPath
	I0505 21:24:50.438141   35937 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHUsername
	I0505 21:24:50.438280   35937 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m04/id_rsa Username:docker}
	I0505 21:24:50.527776   35937 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0505 21:24:50.582418   35937 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0505 21:24:50.639508   35937 main.go:141] libmachine: Stopping "ha-322980-m04"...
	I0505 21:24:50.639545   35937 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:50.641332   35937 main.go:141] libmachine: (ha-322980-m04) Calling .Stop
	I0505 21:24:50.644894   35937 main.go:141] libmachine: (ha-322980-m04) Waiting for machine to stop 0/120
	I0505 21:24:51.646164   35937 main.go:141] libmachine: (ha-322980-m04) Waiting for machine to stop 1/120
	I0505 21:24:52.648193   35937 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:24:52.649433   35937 main.go:141] libmachine: Machine "ha-322980-m04" was stopped.
	I0505 21:24:52.649482   35937 stop.go:75] duration metric: took 2.215951148s to stop
	I0505 21:24:52.649520   35937 stop.go:39] StopHost: ha-322980-m03
	I0505 21:24:52.649939   35937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:24:52.649983   35937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:24:52.664356   35937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I0505 21:24:52.664742   35937 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:24:52.665200   35937 main.go:141] libmachine: Using API Version  1
	I0505 21:24:52.665225   35937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:24:52.665503   35937 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:24:52.667724   35937 out.go:177] * Stopping node "ha-322980-m03"  ...
	I0505 21:24:52.669052   35937 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0505 21:24:52.669076   35937 main.go:141] libmachine: (ha-322980-m03) Calling .DriverName
	I0505 21:24:52.669310   35937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0505 21:24:52.669336   35937 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHHostname
	I0505 21:24:52.671920   35937 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:52.672465   35937 main.go:141] libmachine: (ha-322980-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:64:b7", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:18:47 +0000 UTC Type:0 Mac:52:54:00:c6:64:b7 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ha-322980-m03 Clientid:01:52:54:00:c6:64:b7}
	I0505 21:24:52.672494   35937 main.go:141] libmachine: (ha-322980-m03) DBG | domain ha-322980-m03 has defined IP address 192.168.39.29 and MAC address 52:54:00:c6:64:b7 in network mk-ha-322980
	I0505 21:24:52.672632   35937 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHPort
	I0505 21:24:52.672808   35937 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHKeyPath
	I0505 21:24:52.672951   35937 main.go:141] libmachine: (ha-322980-m03) Calling .GetSSHUsername
	I0505 21:24:52.673068   35937 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m03/id_rsa Username:docker}
	I0505 21:24:52.762225   35937 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0505 21:24:52.821951   35937 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0505 21:24:52.877708   35937 main.go:141] libmachine: Stopping "ha-322980-m03"...
	I0505 21:24:52.877738   35937 main.go:141] libmachine: (ha-322980-m03) Calling .GetState
	I0505 21:24:52.879204   35937 main.go:141] libmachine: (ha-322980-m03) Calling .Stop
	I0505 21:24:52.882595   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 0/120
	I0505 21:24:53.884121   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 1/120
	I0505 21:24:54.885992   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 2/120
	I0505 21:24:55.887706   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 3/120
	I0505 21:24:56.889161   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 4/120
	I0505 21:24:57.891504   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 5/120
	I0505 21:24:58.893121   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 6/120
	I0505 21:24:59.894795   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 7/120
	I0505 21:25:00.896318   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 8/120
	I0505 21:25:01.897773   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 9/120
	I0505 21:25:02.899874   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 10/120
	I0505 21:25:03.901367   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 11/120
	I0505 21:25:04.902736   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 12/120
	I0505 21:25:05.904033   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 13/120
	I0505 21:25:06.905624   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 14/120
	I0505 21:25:07.907349   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 15/120
	I0505 21:25:08.908863   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 16/120
	I0505 21:25:09.910199   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 17/120
	I0505 21:25:10.911823   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 18/120
	I0505 21:25:11.913378   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 19/120
	I0505 21:25:12.916066   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 20/120
	I0505 21:25:13.917744   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 21/120
	I0505 21:25:14.919256   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 22/120
	I0505 21:25:15.921359   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 23/120
	I0505 21:25:16.922894   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 24/120
	I0505 21:25:17.925019   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 25/120
	I0505 21:25:18.926941   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 26/120
	I0505 21:25:19.928578   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 27/120
	I0505 21:25:20.930118   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 28/120
	I0505 21:25:21.931506   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 29/120
	I0505 21:25:22.933138   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 30/120
	I0505 21:25:23.934568   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 31/120
	I0505 21:25:24.936120   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 32/120
	I0505 21:25:25.937937   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 33/120
	I0505 21:25:26.939356   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 34/120
	I0505 21:25:27.940764   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 35/120
	I0505 21:25:28.942205   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 36/120
	I0505 21:25:29.943913   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 37/120
	I0505 21:25:30.945952   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 38/120
	I0505 21:25:31.947561   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 39/120
	I0505 21:25:32.949330   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 40/120
	I0505 21:25:33.950870   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 41/120
	I0505 21:25:34.952275   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 42/120
	I0505 21:25:35.953618   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 43/120
	I0505 21:25:36.955034   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 44/120
	I0505 21:25:37.956627   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 45/120
	I0505 21:25:38.958043   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 46/120
	I0505 21:25:39.959355   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 47/120
	I0505 21:25:40.960868   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 48/120
	I0505 21:25:41.962161   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 49/120
	I0505 21:25:42.963373   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 50/120
	I0505 21:25:43.964906   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 51/120
	I0505 21:25:44.966164   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 52/120
	I0505 21:25:45.968545   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 53/120
	I0505 21:25:46.970170   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 54/120
	I0505 21:25:47.972068   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 55/120
	I0505 21:25:48.974040   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 56/120
	I0505 21:25:49.975596   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 57/120
	I0505 21:25:50.977127   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 58/120
	I0505 21:25:51.978862   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 59/120
	I0505 21:25:52.980521   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 60/120
	I0505 21:25:53.982486   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 61/120
	I0505 21:25:54.983788   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 62/120
	I0505 21:25:55.986176   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 63/120
	I0505 21:25:56.987545   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 64/120
	I0505 21:25:57.988850   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 65/120
	I0505 21:25:58.990441   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 66/120
	I0505 21:25:59.991737   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 67/120
	I0505 21:26:00.993497   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 68/120
	I0505 21:26:01.994792   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 69/120
	I0505 21:26:02.996733   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 70/120
	I0505 21:26:03.998129   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 71/120
	I0505 21:26:04.999585   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 72/120
	I0505 21:26:06.001153   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 73/120
	I0505 21:26:07.002604   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 74/120
	I0505 21:26:08.004380   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 75/120
	I0505 21:26:09.006266   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 76/120
	I0505 21:26:10.007693   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 77/120
	I0505 21:26:11.010059   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 78/120
	I0505 21:26:12.011564   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 79/120
	I0505 21:26:13.013614   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 80/120
	I0505 21:26:14.015003   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 81/120
	I0505 21:26:15.016494   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 82/120
	I0505 21:26:16.017947   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 83/120
	I0505 21:26:17.019538   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 84/120
	I0505 21:26:18.021171   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 85/120
	I0505 21:26:19.022729   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 86/120
	I0505 21:26:20.024162   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 87/120
	I0505 21:26:21.025451   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 88/120
	I0505 21:26:22.026884   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 89/120
	I0505 21:26:23.028909   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 90/120
	I0505 21:26:24.030457   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 91/120
	I0505 21:26:25.031887   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 92/120
	I0505 21:26:26.033346   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 93/120
	I0505 21:26:27.034873   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 94/120
	I0505 21:26:28.036991   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 95/120
	I0505 21:26:29.038559   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 96/120
	I0505 21:26:30.039986   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 97/120
	I0505 21:26:31.041407   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 98/120
	I0505 21:26:32.042785   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 99/120
	I0505 21:26:33.045075   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 100/120
	I0505 21:26:34.046376   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 101/120
	I0505 21:26:35.048038   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 102/120
	I0505 21:26:36.049624   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 103/120
	I0505 21:26:37.050942   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 104/120
	I0505 21:26:38.052350   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 105/120
	I0505 21:26:39.053697   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 106/120
	I0505 21:26:40.055349   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 107/120
	I0505 21:26:41.056676   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 108/120
	I0505 21:26:42.058038   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 109/120
	I0505 21:26:43.060087   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 110/120
	I0505 21:26:44.061479   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 111/120
	I0505 21:26:45.063201   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 112/120
	I0505 21:26:46.064768   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 113/120
	I0505 21:26:47.066146   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 114/120
	I0505 21:26:48.067922   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 115/120
	I0505 21:26:49.069146   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 116/120
	I0505 21:26:50.070462   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 117/120
	I0505 21:26:51.071741   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 118/120
	I0505 21:26:52.073039   35937 main.go:141] libmachine: (ha-322980-m03) Waiting for machine to stop 119/120
	I0505 21:26:53.074015   35937 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0505 21:26:53.074088   35937 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0505 21:26:53.076283   35937 out.go:177] 
	W0505 21:26:53.077730   35937 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0505 21:26:53.077749   35937 out.go:239] * 
	* 
	W0505 21:26:53.080020   35937 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 21:26:53.082359   35937 out.go:177] 

                                                
                                                
** /stderr **
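For reference, the stop failure captured above follows a simple bounded-polling pattern: the driver checks the VM state once per second for 120 attempts (roughly two minutes) and then gives up with 'unable to stop vm, current state "Running"'. The Go sketch below only reproduces that pattern as an illustration; the type and function names are invented here and this is not minikube's libmachine implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState mirrors the state strings that appear in the log ("Running", "Stopped").
type vmState string

const stateRunning vmState = "Running"

// waitForStop polls getState once per second for at most maxAttempts attempts,
// printing the same kind of "Waiting for machine to stop i/maxAttempts"
// progress lines as the log above. It returns an error if the VM is still
// running after the final attempt.
func waitForStop(getState func() (vmState, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		st, err := getState()
		if err != nil {
			return err
		}
		if st != stateRunning {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A driver stub that never leaves "Running" reproduces the timeout path
	// seen above: 120 progress lines, then the stop error.
	alwaysRunning := func() (vmState, error) { return stateRunning, nil }
	if err := waitForStop(alwaysRunning, 120); err != nil {
		fmt.Println("stop err:", err)
	}
}

Run against a stub that never stops, this prints the same 0/120 through 119/120 progression seen in the log before returning the timeout error.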
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-322980 -v=7 --alsologtostderr" : exit status 82
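The exit status 82 recorded here is the status of the minikube process whose stderr, ending in the GUEST_STOP_TIMEOUT message, is shown above. As a generic, hypothetical sketch of how a Go harness can surface such a status, the helper below runs the binary and extracts the child's exit code; the helper name is invented and the arguments are illustrative, not the real ha_test.go helpers.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runMinikube is a hypothetical helper: it runs the minikube binary with the
// given arguments and returns the process exit code together with its
// combined stdout/stderr, which is how a non-zero status such as 82 becomes
// visible to the caller.
func runMinikube(args ...string) (int, string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), string(out), nil
	}
	return 0, string(out), err
}

func main() {
	code, out, err := runMinikube("stop", "-p", "ha-322980", "-v=7", "--alsologtostderr")
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	if code != 0 {
		fmt.Printf("minikube stop failed: exit status %d\n%s", code, out)
	}
}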
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-322980 --wait=true -v=7 --alsologtostderr
E0505 21:28:14.994095   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:29:31.829513   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-322980 --wait=true -v=7 --alsologtostderr: exit status 80 (3m12.735701891s)

                                                
                                                
-- stdout --
	* [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	* Updating the running kvm2 "ha-322980" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-322980-m02" control-plane node in "ha-322980" cluster
	* Restarting existing kvm2 VM for "ha-322980-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.178
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.178
	* Verifying Kubernetes components...
	
	* Starting "ha-322980-m03" control-plane node in "ha-322980" cluster
	* Restarting existing kvm2 VM for "ha-322980-m03" ...
	* Updating the running kvm2 "ha-322980-m03" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:26:53.140232   36399 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:26:53.140470   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140481   36399 out.go:304] Setting ErrFile to fd 2...
	I0505 21:26:53.140485   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140670   36399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:26:53.141198   36399 out.go:298] Setting JSON to false
	I0505 21:26:53.142084   36399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4160,"bootTime":1714940253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:26:53.142153   36399 start.go:139] virtualization: kvm guest
	I0505 21:26:53.144497   36399 out.go:177] * [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:26:53.146260   36399 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:26:53.146193   36399 notify.go:220] Checking for updates...
	I0505 21:26:53.148784   36399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:26:53.150106   36399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:26:53.151383   36399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:26:53.152533   36399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:26:53.153673   36399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:26:53.155327   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.155445   36399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:26:53.155966   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.156031   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.171200   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0505 21:26:53.171619   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.172129   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.172150   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.172473   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.172681   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.208543   36399 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:26:53.209967   36399 start.go:297] selected driver: kvm2
	I0505 21:26:53.209989   36399 start.go:901] validating driver "kvm2" against &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.210123   36399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:26:53.210493   36399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.210573   36399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:26:53.224851   36399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:26:53.225522   36399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:26:53.225581   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:26:53.225592   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:26:53.225643   36399 start.go:340] cluster config:
	{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.225764   36399 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.228370   36399 out.go:177] * Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	I0505 21:26:53.230047   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:26:53.230086   36399 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:26:53.230093   36399 cache.go:56] Caching tarball of preloaded images
	I0505 21:26:53.230188   36399 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:26:53.230200   36399 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:26:53.230314   36399 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:26:53.230520   36399 start.go:360] acquireMachinesLock for ha-322980: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:26:53.230568   36399 start.go:364] duration metric: took 30.264µs to acquireMachinesLock for "ha-322980"
	I0505 21:26:53.230584   36399 start.go:96] Skipping create...Using existing machine configuration
	I0505 21:26:53.230594   36399 fix.go:54] fixHost starting: 
	I0505 21:26:53.230851   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.230880   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.244841   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
	I0505 21:26:53.245311   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.245787   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.245816   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.246134   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.246309   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.246459   36399 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:26:53.248132   36399 fix.go:112] recreateIfNeeded on ha-322980: state=Running err=<nil>
	W0505 21:26:53.248160   36399 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 21:26:53.251264   36399 out.go:177] * Updating the running kvm2 "ha-322980" VM ...
	I0505 21:26:53.252511   36399 machine.go:94] provisionDockerMachine start ...
	I0505 21:26:53.252536   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.252737   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.255085   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255500   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.255526   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255681   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.255852   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256000   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256133   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.256288   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.256537   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.256551   36399 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 21:26:53.369308   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.369346   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369606   36399 buildroot.go:166] provisioning hostname "ha-322980"
	I0505 21:26:53.369639   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369820   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.372637   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373124   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.373151   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373370   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.373567   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373735   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373877   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.374056   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.374277   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.374294   36399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980 && echo "ha-322980" | sudo tee /etc/hostname
	I0505 21:26:53.506808   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.506842   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.509223   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509600   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.509626   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509814   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.509985   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510157   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510289   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.510416   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.510579   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.510595   36399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:26:53.629485   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:26:53.629511   36399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:26:53.629528   36399 buildroot.go:174] setting up certificates
	I0505 21:26:53.629535   36399 provision.go:84] configureAuth start
	I0505 21:26:53.629551   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.629801   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:26:53.632716   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633088   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.633131   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633288   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.635715   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636140   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.636167   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636330   36399 provision.go:143] copyHostCerts
	I0505 21:26:53.636361   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636406   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:26:53.636418   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636502   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:26:53.636618   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636644   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:26:53.636654   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636691   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:26:53.636765   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636795   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:26:53.636805   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636837   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:26:53.636954   36399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980 san=[127.0.0.1 192.168.39.178 ha-322980 localhost minikube]
	I0505 21:26:53.769238   36399 provision.go:177] copyRemoteCerts
	I0505 21:26:53.769301   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:26:53.769337   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.772321   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772662   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.772698   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772861   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.773067   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.773321   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.773466   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:26:53.859548   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:26:53.859622   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:26:53.890248   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:26:53.890322   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:26:53.919935   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:26:53.919995   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0505 21:26:53.952579   36399 provision.go:87] duration metric: took 323.032938ms to configureAuth
	I0505 21:26:53.952610   36399 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:26:53.952915   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.952991   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.955785   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956181   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.956212   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956489   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.956663   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.956856   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.957020   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.957195   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.957360   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.957381   36399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:28:24.802156   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:28:24.802179   36399 machine.go:97] duration metric: took 1m31.549649754s to provisionDockerMachine
	I0505 21:28:24.802191   36399 start.go:293] postStartSetup for "ha-322980" (driver="kvm2")
	I0505 21:28:24.802201   36399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:28:24.802219   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.802523   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:28:24.802541   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.805857   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806374   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.806400   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806574   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.806774   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.806947   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.807068   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:24.897937   36399 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:28:24.902998   36399 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:28:24.903020   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:28:24.903069   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:28:24.903140   36399 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:28:24.903156   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:28:24.903230   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:28:24.914976   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:24.942422   36399 start.go:296] duration metric: took 140.219842ms for postStartSetup
	I0505 21:28:24.942466   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.942795   36399 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 21:28:24.942828   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.945241   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945698   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.945723   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945879   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.946049   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.946187   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.946343   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	W0505 21:28:25.031258   36399 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0505 21:28:25.031281   36399 fix.go:56] duration metric: took 1m31.80069046s for fixHost
	I0505 21:28:25.031302   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.033882   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034222   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.034253   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034384   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.034608   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034808   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034979   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.035177   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:28:25.035393   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:28:25.035405   36399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 21:28:25.145055   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944505.115925429
	
	I0505 21:28:25.145080   36399 fix.go:216] guest clock: 1714944505.115925429
	I0505 21:28:25.145089   36399 fix.go:229] Guest: 2024-05-05 21:28:25.115925429 +0000 UTC Remote: 2024-05-05 21:28:25.031289392 +0000 UTC m=+91.939181071 (delta=84.636037ms)
	I0505 21:28:25.145109   36399 fix.go:200] guest clock delta is within tolerance: 84.636037ms
	I0505 21:28:25.145114   36399 start.go:83] releasing machines lock for "ha-322980", held for 1m31.914536671s
	I0505 21:28:25.145132   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.145355   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:25.147953   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148359   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.148378   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148549   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149031   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149206   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149302   36399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:28:25.149351   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.149450   36399 ssh_runner.go:195] Run: cat /version.json
	I0505 21:28:25.149476   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.152099   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152175   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152532   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152556   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152579   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152591   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152718   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152853   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152916   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.152986   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.153044   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153100   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153155   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.153222   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.262146   36399 ssh_runner.go:195] Run: systemctl --version
	I0505 21:28:25.269585   36399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:28:25.445107   36399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:28:25.452093   36399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:28:25.452159   36399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:28:25.462054   36399 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 21:28:25.462081   36399 start.go:494] detecting cgroup driver to use...
	I0505 21:28:25.462145   36399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:28:25.479385   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:28:25.493826   36399 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:28:25.493881   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:28:25.508310   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:28:25.522866   36399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:28:25.681241   36399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:28:25.837193   36399 docker.go:233] disabling docker service ...
	I0505 21:28:25.837273   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:28:25.854654   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:28:25.869168   36399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:28:26.021077   36399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:28:26.172560   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:28:26.187950   36399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:28:26.209945   36399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:28:26.210011   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.221767   36399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:28:26.221821   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.233242   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.244526   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.255938   36399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:28:26.269084   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.280325   36399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.293020   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.303829   36399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:28:26.314019   36399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:28:26.324025   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:26.475013   36399 ssh_runner.go:195] Run: sudo systemctl restart crio
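
The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and open unprivileged ports via default_sysctls before CRI-O is restarted. A minimal sketch of one such in-place edit in Go (file path and replacement taken from the log; regex-based like the logged sed, requires root):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Force cgroup_manager = "cgroupfs" in the CRI-O drop-in, as the sed command above does.
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}

As in the log, the edit only takes effect once CRI-O is restarted afterwards.
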
	I0505 21:28:26.786010   36399 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:28:26.786082   36399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:28:26.791904   36399 start.go:562] Will wait 60s for crictl version
	I0505 21:28:26.791958   36399 ssh_runner.go:195] Run: which crictl
	I0505 21:28:26.796301   36399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:28:26.839834   36399 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
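
After restarting CRI-O, the log waits up to 60s for the socket and then for `crictl version` to answer. A small sketch of that polling loop (command and timeout taken from the log; minikube actually runs this over SSH on the guest):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Poll `crictl version` until it succeeds or the 60s deadline passes.
	deadline := time.Now().Add(60 * time.Second)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("crictl did not become ready: %v", err)
		}
		time.Sleep(2 * time.Second)
	}
}
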
	I0505 21:28:26.839910   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.872417   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.905097   36399 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:28:26.906534   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:26.909264   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909627   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:26.909642   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909860   36399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:28:26.915241   36399 kubeadm.go:877] updating cluster {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:28:26.915374   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:28:26.915433   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:26.965243   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:26.965271   36399 crio.go:433] Images already preloaded, skipping extraction
	I0505 21:28:26.965342   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:27.008398   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:27.008421   36399 cache_images.go:84] Images are preloaded, skipping loading
	I0505 21:28:27.008433   36399 kubeadm.go:928] updating node { 192.168.39.178 8443 v1.30.0 crio true true} ...
	I0505 21:28:27.008545   36399 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
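
The drop-in above overrides the kubelet ExecStart so the v1.30.0 binary runs with the node's hostname and IP pinned. A text/template sketch that renders a similar drop-in from the logged values (the template text is an approximation, not minikube's actual template):

package main

import (
	"log"
	"os"
	"text/template"
)

// An approximate kubelet systemd drop-in; field values come from the log above.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.0",
		"NodeName":          "ha-322980",
		"NodeIP":            "192.168.39.178",
	})
	if err != nil {
		log.Fatal(err)
	}
}
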
	I0505 21:28:27.008627   36399 ssh_runner.go:195] Run: crio config
	I0505 21:28:27.062535   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:28:27.062560   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:28:27.062572   36399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:28:27.062601   36399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-322980 NodeName:ha-322980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:28:27.062742   36399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-322980"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
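
The generated kubeadm config is a multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One thing worth checking in such a config is that the kubelet's cgroupDriver (cgroupfs) matches the cgroup_manager set for CRI-O earlier, since a mismatch is a common reason pods fail to start. A sketch that decodes the documents and prints that field (the path is where the log later copies the config, /var/tmp/minikube/kubeadm.yaml.new; gopkg.in/yaml.v3 is an assumed dependency):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Walk every YAML document in the file and report the kubelet cgroup driver.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
		}
	}
}
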
	
	I0505 21:28:27.062764   36399 kube-vip.go:111] generating kube-vip config ...
	I0505 21:28:27.062801   36399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:28:27.076515   36399 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:28:27.076654   36399 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
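
The manifest above runs kube-vip as a static pod with control-plane load-balancing auto-enabled on the HA VIP 192.168.39.254, port 8443. A hypothetical reachability probe for that VIP (not part of minikube or the test suite; address and port come from the manifest):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Check that something is answering TCP on the kube-vip control-plane VIP.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 5*time.Second)
	if err != nil {
		log.Fatalf("VIP not reachable: %v", err)
	}
	defer conn.Close()
	fmt.Println("kube-vip VIP accepting TCP connections")
}
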
	I0505 21:28:27.076721   36399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:28:27.087275   36399 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:28:27.087332   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 21:28:27.097140   36399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0505 21:28:27.115596   36399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:28:27.133989   36399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0505 21:28:27.152325   36399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:28:27.171626   36399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:28:27.176255   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:27.333712   36399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:28:27.351006   36399 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.178
	I0505 21:28:27.351031   36399 certs.go:194] generating shared ca certs ...
	I0505 21:28:27.351047   36399 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.351203   36399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:28:27.351247   36399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:28:27.351256   36399 certs.go:256] generating profile certs ...
	I0505 21:28:27.351322   36399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:28:27.351349   36399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019
	I0505 21:28:27.351360   36399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.29 192.168.39.254]
	I0505 21:28:27.773033   36399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 ...
	I0505 21:28:27.773068   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019: {Name:mk074feb2c078ad2537bc4b0f4572ad95bc07b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773263   36399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 ...
	I0505 21:28:27.773277   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019: {Name:mk2665c22bdd3135504eab2bc878577f3cbff151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773371   36399 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:28:27.773505   36399 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:28:27.773631   36399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:28:27.773646   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:28:27.773658   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:28:27.773671   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:28:27.773683   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:28:27.773695   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:28:27.773707   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:28:27.773719   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:28:27.773731   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:28:27.773773   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:28:27.773800   36399 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:28:27.773809   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:28:27.773829   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:28:27.773850   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:28:27.773870   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:28:27.773905   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:27.773929   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:27.773943   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:28:27.773955   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:28:27.774493   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:28:27.804503   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:28:27.830821   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:28:27.858720   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:28:27.886328   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 21:28:27.912918   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:28:27.940090   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:28:27.967530   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:28:27.994650   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:28:28.022349   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:28:28.049290   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:28:28.075642   36399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:28:28.094413   36399 ssh_runner.go:195] Run: openssl version
	I0505 21:28:28.101667   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:28:28.114593   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119911   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119966   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.126513   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:28:28.136871   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:28:28.148896   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154099   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154153   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.160414   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:28:28.171000   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:28:28.184015   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189022   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189068   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.196002   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:28:28.206271   36399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:28:28.211552   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 21:28:28.218198   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 21:28:28.224606   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 21:28:28.230931   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 21:28:28.237169   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 21:28:28.243293   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
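
The `openssl x509 ... -checkend 86400` runs above fail if a certificate expires within the next 24 hours. The same check expressed with Go's standard library, for one of the certs from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -checkend 86400` on a PEM certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		log.Fatalf("certificate expires soon: %s", cert.NotAfter)
	}
	fmt.Println("certificate valid for at least another 24h")
}
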
	I0505 21:28:28.249553   36399 kubeadm.go:391] StartCluster: {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:28:28.249672   36399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:28:28.249724   36399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:28:28.296303   36399 cri.go:89] found id: "e643f88ce68e29460e940448779ea8b8b309d24d97a13d57fe0b3139f920999a"
	I0505 21:28:28.296320   36399 cri.go:89] found id: "31d5340e9679504cad0e8fc998a460f07a03ad902d57ee2dea4946953cbad32d"
	I0505 21:28:28.296324   36399 cri.go:89] found id: "e6747aa9368ee1e6895cb4bf1eed8173977dc9bddfc0ea1b03750a3d23697184"
	I0505 21:28:28.296327   36399 cri.go:89] found id: "7894a12a0cfac62f67b7770ea3e5c8dbc28723b9c7c40b415fcdcf36899ac17d"
	I0505 21:28:28.296330   36399 cri.go:89] found id: "8f325a9ea25d6ff0517a638bff175fe1f4c646916941e4d3a93f5ff6f13f0187"
	I0505 21:28:28.296333   36399 cri.go:89] found id: "0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b"
	I0505 21:28:28.296335   36399 cri.go:89] found id: "e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d"
	I0505 21:28:28.296338   36399 cri.go:89] found id: "63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355"
	I0505 21:28:28.296340   36399 cri.go:89] found id: "4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c"
	I0505 21:28:28.296347   36399 cri.go:89] found id: "abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f"
	I0505 21:28:28.296349   36399 cri.go:89] found id: "d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b"
	I0505 21:28:28.296353   36399 cri.go:89] found id: "b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f"
	I0505 21:28:28.296359   36399 cri.go:89] found id: "97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923"
	I0505 21:28:28.296363   36399 cri.go:89] found id: "6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d"
	I0505 21:28:28.296369   36399 cri.go:89] found id: ""
	I0505 21:28:28.296419   36399 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-322980 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-322980
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-322980 -n ha-322980
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-322980 logs -n 25: (1.959104969s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m04 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp testdata/cp-test.txt                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m04_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03:/home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m03 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-322980 node stop m02 -v=7                                                     | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-322980 node start m02 -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980 -v=7                                                           | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-322980 -v=7                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-322980 --wait=true -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:26:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:26:53.140232   36399 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:26:53.140470   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140481   36399 out.go:304] Setting ErrFile to fd 2...
	I0505 21:26:53.140485   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140670   36399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:26:53.141198   36399 out.go:298] Setting JSON to false
	I0505 21:26:53.142084   36399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4160,"bootTime":1714940253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:26:53.142153   36399 start.go:139] virtualization: kvm guest
	I0505 21:26:53.144497   36399 out.go:177] * [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:26:53.146260   36399 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:26:53.146193   36399 notify.go:220] Checking for updates...
	I0505 21:26:53.148784   36399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:26:53.150106   36399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:26:53.151383   36399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:26:53.152533   36399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:26:53.153673   36399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:26:53.155327   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.155445   36399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:26:53.155966   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.156031   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.171200   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0505 21:26:53.171619   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.172129   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.172150   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.172473   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.172681   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.208543   36399 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:26:53.209967   36399 start.go:297] selected driver: kvm2
	I0505 21:26:53.209989   36399 start.go:901] validating driver "kvm2" against &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.210123   36399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:26:53.210493   36399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.210573   36399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:26:53.224851   36399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:26:53.225522   36399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:26:53.225581   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:26:53.225592   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:26:53.225643   36399 start.go:340] cluster config:
	{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.225764   36399 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.228370   36399 out.go:177] * Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	I0505 21:26:53.230047   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:26:53.230086   36399 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:26:53.230093   36399 cache.go:56] Caching tarball of preloaded images
	I0505 21:26:53.230188   36399 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:26:53.230200   36399 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:26:53.230314   36399 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:26:53.230520   36399 start.go:360] acquireMachinesLock for ha-322980: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:26:53.230568   36399 start.go:364] duration metric: took 30.264µs to acquireMachinesLock for "ha-322980"
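For illustration, a minimal Go sketch of the retry-until-timeout lock pattern implied by the Delay:500ms/Timeout:13m0s parameters in the lock acquisition above; the lock-file path and the tryLock helper are placeholders, not minikube's actual locking code.

// Hedged sketch only: generic "retry with delay until timeout" acquisition,
// in the spirit of the Delay:500ms / Timeout:13m0s values logged above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock attempts to create the lock file exclusively; failure means it is already held.
func tryLock(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		if errors.Is(err, os.ErrExist) {
			return false, nil // someone else holds the lock
		}
		return false, err
	}
	return true, f.Close()
}

// acquireWithRetry polls tryLock every delay until the timeout expires.
func acquireWithRetry(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := tryLock(path)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquireWithRetry("/tmp/ha-322980.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println("acquire failed:", err)
		return
	}
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}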
	I0505 21:26:53.230584   36399 start.go:96] Skipping create...Using existing machine configuration
	I0505 21:26:53.230594   36399 fix.go:54] fixHost starting: 
	I0505 21:26:53.230851   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.230880   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.244841   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
	I0505 21:26:53.245311   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.245787   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.245816   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.246134   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.246309   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.246459   36399 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:26:53.248132   36399 fix.go:112] recreateIfNeeded on ha-322980: state=Running err=<nil>
	W0505 21:26:53.248160   36399 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 21:26:53.251264   36399 out.go:177] * Updating the running kvm2 "ha-322980" VM ...
	I0505 21:26:53.252511   36399 machine.go:94] provisionDockerMachine start ...
	I0505 21:26:53.252536   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.252737   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.255085   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255500   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.255526   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255681   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.255852   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256000   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256133   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.256288   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.256537   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.256551   36399 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 21:26:53.369308   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.369346   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369606   36399 buildroot.go:166] provisioning hostname "ha-322980"
	I0505 21:26:53.369639   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369820   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.372637   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373124   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.373151   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373370   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.373567   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373735   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373877   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.374056   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.374277   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.374294   36399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980 && echo "ha-322980" | sudo tee /etc/hostname
	I0505 21:26:53.506808   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.506842   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.509223   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509600   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.509626   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509814   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.509985   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510157   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510289   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.510416   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.510579   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.510595   36399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:26:53.629485   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
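The provisioning steps above all run shell commands on the VM over SSH. A hedged sketch of that pattern using golang.org/x/crypto/ssh follows; the address, user, and key path are taken from the log, but the helper itself is illustrative, not minikube's implementation.

// Hedged sketch only: run one remote command over SSH, roughly what the
// GetSSHHostname/GetSSHKeyPath/"About to run SSH command" lines correspond to.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.178:22", "docker",
		"/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa",
		"hostname")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}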
	I0505 21:26:53.629511   36399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:26:53.629528   36399 buildroot.go:174] setting up certificates
	I0505 21:26:53.629535   36399 provision.go:84] configureAuth start
	I0505 21:26:53.629551   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.629801   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:26:53.632716   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633088   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.633131   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633288   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.635715   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636140   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.636167   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636330   36399 provision.go:143] copyHostCerts
	I0505 21:26:53.636361   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636406   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:26:53.636418   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636502   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:26:53.636618   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636644   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:26:53.636654   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636691   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:26:53.636765   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636795   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:26:53.636805   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636837   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:26:53.636954   36399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980 san=[127.0.0.1 192.168.39.178 ha-322980 localhost minikube]
	I0505 21:26:53.769238   36399 provision.go:177] copyRemoteCerts
	I0505 21:26:53.769301   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:26:53.769337   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.772321   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772662   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.772698   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772861   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.773067   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.773321   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.773466   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:26:53.859548   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:26:53.859622   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:26:53.890248   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:26:53.890322   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:26:53.919935   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:26:53.919995   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0505 21:26:53.952579   36399 provision.go:87] duration metric: took 323.032938ms to configureAuth
	I0505 21:26:53.952610   36399 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:26:53.952915   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.952991   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.955785   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956181   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.956212   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956489   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.956663   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.956856   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.957020   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.957195   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.957360   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.957381   36399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:28:24.802156   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:28:24.802179   36399 machine.go:97] duration metric: took 1m31.549649754s to provisionDockerMachine
	I0505 21:28:24.802191   36399 start.go:293] postStartSetup for "ha-322980" (driver="kvm2")
	I0505 21:28:24.802201   36399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:28:24.802219   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.802523   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:28:24.802541   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.805857   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806374   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.806400   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806574   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.806774   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.806947   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.807068   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:24.897937   36399 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:28:24.902998   36399 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:28:24.903020   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:28:24.903069   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:28:24.903140   36399 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:28:24.903156   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:28:24.903230   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:28:24.914976   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:24.942422   36399 start.go:296] duration metric: took 140.219842ms for postStartSetup
	I0505 21:28:24.942466   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.942795   36399 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 21:28:24.942828   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.945241   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945698   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.945723   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945879   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.946049   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.946187   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.946343   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	W0505 21:28:25.031258   36399 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0505 21:28:25.031281   36399 fix.go:56] duration metric: took 1m31.80069046s for fixHost
	I0505 21:28:25.031302   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.033882   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034222   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.034253   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034384   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.034608   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034808   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034979   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.035177   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:28:25.035393   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:28:25.035405   36399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:28:25.145055   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944505.115925429
	
	I0505 21:28:25.145080   36399 fix.go:216] guest clock: 1714944505.115925429
	I0505 21:28:25.145089   36399 fix.go:229] Guest: 2024-05-05 21:28:25.115925429 +0000 UTC Remote: 2024-05-05 21:28:25.031289392 +0000 UTC m=+91.939181071 (delta=84.636037ms)
	I0505 21:28:25.145109   36399 fix.go:200] guest clock delta is within tolerance: 84.636037ms
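A small hedged sketch of the clock-delta check logged just above: parse the guest's `date +%s.%N` output and compare it with the local clock. The 2s tolerance used here is illustrative, not the value minikube applies.

// Hedged sketch only: guest-vs-host clock delta check.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts "1714944505.115925429" into a time.Time
// (approximate at the nanosecond level; good enough for a coarse tolerance check).
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714944505.115925429")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta < tolerance)
}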
	I0505 21:28:25.145114   36399 start.go:83] releasing machines lock for "ha-322980", held for 1m31.914536671s
	I0505 21:28:25.145132   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.145355   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:25.147953   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148359   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.148378   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148549   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149031   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149206   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149302   36399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:28:25.149351   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.149450   36399 ssh_runner.go:195] Run: cat /version.json
	I0505 21:28:25.149476   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.152099   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152175   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152532   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152556   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152579   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152591   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152718   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152853   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152916   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.152986   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.153044   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153100   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153155   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.153222   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.262146   36399 ssh_runner.go:195] Run: systemctl --version
	I0505 21:28:25.269585   36399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:28:25.445107   36399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:28:25.452093   36399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:28:25.452159   36399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:28:25.462054   36399 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 21:28:25.462081   36399 start.go:494] detecting cgroup driver to use...
	I0505 21:28:25.462145   36399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:28:25.479385   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:28:25.493826   36399 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:28:25.493881   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:28:25.508310   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:28:25.522866   36399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:28:25.681241   36399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:28:25.837193   36399 docker.go:233] disabling docker service ...
	I0505 21:28:25.837273   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:28:25.854654   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:28:25.869168   36399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:28:26.021077   36399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:28:26.172560   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:28:26.187950   36399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:28:26.209945   36399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:28:26.210011   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.221767   36399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:28:26.221821   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.233242   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.244526   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.255938   36399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:28:26.269084   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.280325   36399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.293020   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.303829   36399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:28:26.314019   36399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:28:26.324025   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:26.475013   36399 ssh_runner.go:195] Run: sudo systemctl restart crio
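The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A hedged Go sketch of the same line-oriented rewrite follows, applied to an illustrative sample of the file rather than the VM's real config.

// Hedged sketch only: reproduce the pause_image / cgroup_manager / conmon_cgroup
// rewrites performed by the sed commands above, on a sample snippet.
package main

import (
	"fmt"
	"regexp"
)

const sample = `[crio.image]
pause_image = "registry.k8s.io/pause:3.7"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	out := sample
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.9"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
	// sed -i '/conmon_cgroup = .*/d' followed by
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(out, "")
	out = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(out, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(out)
}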
	I0505 21:28:26.786010   36399 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:28:26.786082   36399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:28:26.791904   36399 start.go:562] Will wait 60s for crictl version
	I0505 21:28:26.791958   36399 ssh_runner.go:195] Run: which crictl
	I0505 21:28:26.796301   36399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:28:26.839834   36399 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:28:26.839910   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.872417   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.905097   36399 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:28:26.906534   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:26.909264   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909627   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:26.909642   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909860   36399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:28:26.915241   36399 kubeadm.go:877] updating cluster {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:28:26.915374   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:28:26.915433   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:26.965243   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:26.965271   36399 crio.go:433] Images already preloaded, skipping extraction
	I0505 21:28:26.965342   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:27.008398   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:27.008421   36399 cache_images.go:84] Images are preloaded, skipping loading
	I0505 21:28:27.008433   36399 kubeadm.go:928] updating node { 192.168.39.178 8443 v1.30.0 crio true true} ...
	I0505 21:28:27.008545   36399 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:28:27.008627   36399 ssh_runner.go:195] Run: crio config
	I0505 21:28:27.062535   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:28:27.062560   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:28:27.062572   36399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:28:27.062601   36399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-322980 NodeName:ha-322980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:28:27.062742   36399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-322980"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
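One hedged way to produce a kubeadm config like the block above is to fill a text/template with the node-specific values (IP, node name, version, port); the trimmed template below is illustrative, not minikube's actual template.

// Hedged sketch only: render a minimal kubeadm config from node-specific values.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
`

func main() {
	data := struct {
		NodeIP, NodeName, KubernetesVersion string
		APIServerPort                       int
	}{"192.168.39.178", "ha-322980", "v1.30.0", 8443}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}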
	I0505 21:28:27.062764   36399 kube-vip.go:111] generating kube-vip config ...
	I0505 21:28:27.062801   36399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:28:27.076515   36399 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:28:27.076654   36399 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
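A hedged sketch of reading a static-pod manifest like the kube-vip config above back into a struct to check the image and VIP address; it uses gopkg.in/yaml.v3, and the manifest here is trimmed to the fields being checked.

// Hedged sketch only: decode a static-pod manifest and pull out the image and VIP.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    env:
    - name: address
      value: 192.168.39.254
`

type pod struct {
	Spec struct {
		Containers []struct {
			Name  string `yaml:"name"`
			Image string `yaml:"image"`
			Env   []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	var p pod
	if err := yaml.Unmarshal([]byte(manifest), &p); err != nil {
		panic(err)
	}
	c := p.Spec.Containers[0]
	fmt.Println("image:", c.Image)
	for _, e := range c.Env {
		if e.Name == "address" {
			fmt.Println("VIP:", e.Value)
		}
	}
}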
	I0505 21:28:27.076721   36399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:28:27.087275   36399 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:28:27.087332   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 21:28:27.097140   36399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0505 21:28:27.115596   36399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:28:27.133989   36399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0505 21:28:27.152325   36399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:28:27.171626   36399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:28:27.176255   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:27.333712   36399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:28:27.351006   36399 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.178
	I0505 21:28:27.351031   36399 certs.go:194] generating shared ca certs ...
	I0505 21:28:27.351047   36399 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.351203   36399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:28:27.351247   36399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:28:27.351256   36399 certs.go:256] generating profile certs ...
	I0505 21:28:27.351322   36399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:28:27.351349   36399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019
	I0505 21:28:27.351360   36399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.29 192.168.39.254]
	I0505 21:28:27.773033   36399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 ...
	I0505 21:28:27.773068   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019: {Name:mk074feb2c078ad2537bc4b0f4572ad95bc07b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773263   36399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 ...
	I0505 21:28:27.773277   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019: {Name:mk2665c22bdd3135504eab2bc878577f3cbff151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773371   36399 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:28:27.773505   36399 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:28:27.773631   36399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
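The apiserver certificate above is generated with a fixed set of IP SANs. A hedged sketch of doing the same with crypto/x509 follows; the throwaway CA, key sizes, subject, and validity period are illustrative stand-ins, since the log reuses an existing minikubeCA rather than creating one.

// Hedged sketch only: issue a CA-signed serving cert carrying the IP SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Illustrative throwaway CA; the run above reuses an existing minikubeCA instead.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Serving certificate with the SAN IPs listed in the crypto.go line above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.178"), net.ParseIP("192.168.39.228"),
		net.ParseIP("192.168.39.29"), net.ParseIP("192.168.39.254"),
	}
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
		DNSNames:     []string{"ha-322980", "localhost", "minikube"},
	}
	leafDER := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}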
	I0505 21:28:27.773646   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:28:27.773658   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:28:27.773671   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:28:27.773683   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:28:27.773695   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:28:27.773707   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:28:27.773719   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:28:27.773731   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:28:27.773773   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:28:27.773800   36399 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:28:27.773809   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:28:27.773829   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:28:27.773850   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:28:27.773870   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:28:27.773905   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:27.773929   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:27.773943   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:28:27.773955   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:28:27.774493   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:28:27.804503   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:28:27.830821   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:28:27.858720   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:28:27.886328   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 21:28:27.912918   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:28:27.940090   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:28:27.967530   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:28:27.994650   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:28:28.022349   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:28:28.049290   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:28:28.075642   36399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:28:28.094413   36399 ssh_runner.go:195] Run: openssl version
	I0505 21:28:28.101667   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:28:28.114593   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119911   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119966   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.126513   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:28:28.136871   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:28:28.148896   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154099   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154153   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.160414   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:28:28.171000   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:28:28.184015   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189022   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189068   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.196002   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:28:28.206271   36399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:28:28.211552   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 21:28:28.218198   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 21:28:28.224606   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 21:28:28.230931   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 21:28:28.237169   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 21:28:28.243293   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 21:28:28.249553   36399 kubeadm.go:391] StartCluster: {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:28:28.249672   36399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:28:28.249724   36399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:28:28.296303   36399 cri.go:89] found id: "e643f88ce68e29460e940448779ea8b8b309d24d97a13d57fe0b3139f920999a"
	I0505 21:28:28.296320   36399 cri.go:89] found id: "31d5340e9679504cad0e8fc998a460f07a03ad902d57ee2dea4946953cbad32d"
	I0505 21:28:28.296324   36399 cri.go:89] found id: "e6747aa9368ee1e6895cb4bf1eed8173977dc9bddfc0ea1b03750a3d23697184"
	I0505 21:28:28.296327   36399 cri.go:89] found id: "7894a12a0cfac62f67b7770ea3e5c8dbc28723b9c7c40b415fcdcf36899ac17d"
	I0505 21:28:28.296330   36399 cri.go:89] found id: "8f325a9ea25d6ff0517a638bff175fe1f4c646916941e4d3a93f5ff6f13f0187"
	I0505 21:28:28.296333   36399 cri.go:89] found id: "0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b"
	I0505 21:28:28.296335   36399 cri.go:89] found id: "e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d"
	I0505 21:28:28.296338   36399 cri.go:89] found id: "63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355"
	I0505 21:28:28.296340   36399 cri.go:89] found id: "4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c"
	I0505 21:28:28.296347   36399 cri.go:89] found id: "abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f"
	I0505 21:28:28.296349   36399 cri.go:89] found id: "d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b"
	I0505 21:28:28.296353   36399 cri.go:89] found id: "b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f"
	I0505 21:28:28.296359   36399 cri.go:89] found id: "97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923"
	I0505 21:28:28.296363   36399 cri.go:89] found id: "6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d"
	I0505 21:28:28.296369   36399 cri.go:89] found id: ""
	I0505 21:28:28.296419   36399 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.640868017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944606640835318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d108dd2b-f9a7-40e7-898e-f8b0c7488584 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.641447042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d6c7fb6-8b39-401d-a7e8-d28e40f66988 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.641531007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d6c7fb6-8b39-401d-a7e8-d28e40f66988 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.642241167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714944552394068781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9a9bfcaf
10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944512763958489,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3
279ac134cd69b882db6,PodSandboxId:d0370265c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
88feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[st
ring]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ff1f42ee456adbb1ab902c56155f156eb2f298a79dc46f7d316794adc69f37,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944512314841248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kuberne
tes.container.hash: cdf39325,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernete
s.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d6c7fb6-8b39-401d-a7e8-d28e40f66988 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.696460701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ef8b939-343c-4b57-a5d6-2f810280e5f6 name=/runtime.v1.RuntimeService/Version
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.696544652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ef8b939-343c-4b57-a5d6-2f810280e5f6 name=/runtime.v1.RuntimeService/Version
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.698545424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c1c23e8-fb6d-4a3c-a55f-2706d45c55b7 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.700152991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944606700122769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c1c23e8-fb6d-4a3c-a55f-2706d45c55b7 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.701248471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1b4ece8-ecd9-4fee-b9f6-436657146eeb name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.701340816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1b4ece8-ecd9-4fee-b9f6-436657146eeb name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.701859706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714944552394068781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9a9bfcaf
10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944512763958489,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3
279ac134cd69b882db6,PodSandboxId:d0370265c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
88feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[st
ring]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ff1f42ee456adbb1ab902c56155f156eb2f298a79dc46f7d316794adc69f37,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944512314841248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kuberne
tes.container.hash: cdf39325,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernete
s.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1b4ece8-ecd9-4fee-b9f6-436657146eeb name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.752848320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7372288e-2839-4e8e-b28d-f642a2ba7a47 name=/runtime.v1.RuntimeService/Version
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.752924887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7372288e-2839-4e8e-b28d-f642a2ba7a47 name=/runtime.v1.RuntimeService/Version
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.756047661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6afe0ca6-5fab-438f-a2c6-e1496db14ec4 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.756905860Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944606756457935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6afe0ca6-5fab-438f-a2c6-e1496db14ec4 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.765852785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae1458f0-850a-4463-a650-975f5d49ecad name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.765938087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae1458f0-850a-4463-a650-975f5d49ecad name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.766327674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714944552394068781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9a9bfcaf
10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944512763958489,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3
279ac134cd69b882db6,PodSandboxId:d0370265c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
88feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[st
ring]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ff1f42ee456adbb1ab902c56155f156eb2f298a79dc46f7d316794adc69f37,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944512314841248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kuberne
tes.container.hash: cdf39325,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernete
s.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae1458f0-850a-4463-a650-975f5d49ecad name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.820198181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76657e43-cb3e-4a53-a751-196a76af9c93 name=/runtime.v1.RuntimeService/Version
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.820332205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76657e43-cb3e-4a53-a751-196a76af9c93 name=/runtime.v1.RuntimeService/Version
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.821541503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e20f6e4-bbec-4dd2-8941-4962193d368e name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.822201924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944606822168130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e20f6e4-bbec-4dd2-8941-4962193d368e name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.822842069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a4a0328-f0f0-4d03-8ca4-438f77fe3c9c name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.822929202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a4a0328-f0f0-4d03-8ca4-438f77fe3c9c name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:06 ha-322980 crio[3885]: time="2024-05-05 21:30:06.823396249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714944552394068781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9a9bfcaf
10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944512763958489,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3
279ac134cd69b882db6,PodSandboxId:d0370265c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
88feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[st
ring]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ff1f42ee456adbb1ab902c56155f156eb2f298a79dc46f7d316794adc69f37,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944512314841248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kuberne
tes.container.hash: cdf39325,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernete
s.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a4a0328-f0f0-4d03-8ca4-438f77fe3c9c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d64f6490c58bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 seconds ago        Running             storage-provisioner       4                   68ad3ff729cb2       storage-provisioner
	d8e5582057ffa       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      23 seconds ago       Running             kindnet-cni               4                   64801e377a379       kindnet-lwtnx
	b48ee84cd3ceb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      48 seconds ago       Running             kube-controller-manager   2                   95e07dfd57148       kube-controller-manager-ha-322980
	a6a90eca6999f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      54 seconds ago       Running             kube-apiserver            3                   e684baf5ef11a       kube-apiserver-ha-322980
	0c012cc95d188       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Exited              storage-provisioner       3                   68ad3ff729cb2       storage-provisioner
	378349efe1d23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9dfb38e6022a7       busybox-fc5497c4f-xt9l5
	ea2d43ee9b97e       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      About a minute ago   Running             kube-vip                  0                   8e6a479fdea9d       kube-vip-ha-322980
	067837019b5f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   d0370265c798a       coredns-7db6d8ff4d-fqt45
	2e9a9bfcaf10e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Exited              kindnet-cni               3                   64801e377a379       kindnet-lwtnx
	06be80792a085       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Exited              kube-controller-manager   1                   95e07dfd57148       kube-controller-manager-ha-322980
	858ab02f25618       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   cd2a674999e8a       coredns-7db6d8ff4d-78zmw
	852f56752c643       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   e36e99eaa4a61       kube-proxy-8xdzd
	d864b4fda0bb9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   4777f05174b29       kube-scheduler-ha-322980
	366a7799ffc65       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   55b2bc86d17b3       etcd-ha-322980
	d1ff1f42ee456       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Exited              kube-apiserver            2                   e684baf5ef11a       kube-apiserver-ha-322980
	d9743f3da0de5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   238b5b24a572e       busybox-fc5497c4f-xt9l5
	0b360d142570d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   cd560b1055b35       coredns-7db6d8ff4d-fqt45
	e065fafa4b7aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   9f56aff0e5f86       coredns-7db6d8ff4d-78zmw
	4da23c6720461       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   8b3a42343ade0       kube-proxy-8xdzd
	d73ef383ce1ab       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      14 minutes ago       Exited              kube-scheduler            0                   913466e1710aa       kube-scheduler-ha-322980
	97769959b22d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   01d81d8dc3bcb       etcd-ha-322980
	
	
	==> coredns [067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6] <==
	Trace[1013525662]: [10.737662485s] [10.737662485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58534->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58550->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1024418272]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:28:44.711) (total time: 13247ms):
	Trace[1024418272]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58550->10.96.0.1:443: read: connection reset by peer 13247ms (21:28:57.959)
	Trace[1024418272]: [13.247425762s] [13.247425762s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58550->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b] <==
	[INFO] 10.244.1.2:51278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017965s
	[INFO] 10.244.1.2:37849 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301689s
	[INFO] 10.244.0.4:58808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118281s
	[INFO] 10.244.0.4:59347 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074943s
	[INFO] 10.244.0.4:44264 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127442s
	[INFO] 10.244.0.4:45870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001035173s
	[INFO] 10.244.0.4:45397 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126149s
	[INFO] 10.244.2.2:38985 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241724s
	[INFO] 10.244.1.2:41200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185837s
	[INFO] 10.244.0.4:53459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188027s
	[INFO] 10.244.0.4:43760 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146395s
	[INFO] 10.244.2.2:45375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112163s
	[INFO] 10.244.2.2:60638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000225418s
	[INFO] 10.244.1.2:33012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251463s
	[INFO] 10.244.0.4:48613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079688s
	[INFO] 10.244.0.4:54870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050324s
	[INFO] 10.244.0.4:36700 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167489s
	[INFO] 10.244.0.4:56859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077358s
	[INFO] 10.244.2.2:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122063s
	[INFO] 10.244.2.2:43717 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123902s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8] <==
	[INFO] plugin/kubernetes: Trace[1109309593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:28:44.662) (total time: 10532ms):
	Trace[1109309593]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45080->10.96.0.1:443: read: connection reset by peer 10530ms (21:28:55.192)
	Trace[1109309593]: [10.532390183s] [10.532390183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45080->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1560009575]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:28:44.458) (total time: 13499ms):
	Trace[1560009575]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45066->10.96.0.1:443: read: connection reset by peer 13499ms (21:28:57.958)
	Trace[1560009575]: [13.499965503s] [13.499965503s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d] <==
	[INFO] 10.244.0.4:43928 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004146s
	[INFO] 10.244.2.2:44358 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001832218s
	[INFO] 10.244.2.2:34081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017944s
	[INFO] 10.244.2.2:36047 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087749s
	[INFO] 10.244.2.2:60557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001143135s
	[INFO] 10.244.2.2:60835 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073052s
	[INFO] 10.244.2.2:42876 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093376s
	[INFO] 10.244.2.2:33057 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070619s
	[INFO] 10.244.1.2:41910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009436s
	[INFO] 10.244.1.2:43839 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082555s
	[INFO] 10.244.1.2:39008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075851s
	[INFO] 10.244.0.4:47500 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110566s
	[INFO] 10.244.0.4:44728 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071752s
	[INFO] 10.244.2.2:38205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222144s
	[INFO] 10.244.2.2:46321 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164371s
	[INFO] 10.244.1.2:41080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205837s
	[INFO] 10.244.1.2:58822 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264144s
	[INFO] 10.244.1.2:55995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174393s
	[INFO] 10.244.2.2:46471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00069286s
	[INFO] 10.244.2.2:52414 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163744s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
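	
	The repeated "failed to list *v1.EndpointSlice ... dial tcp 10.96.0.1:443: connect: no route to host / connection refused" lines above come from the CoreDNS kubernetes plugin's client-go reflector, which lists Services, Namespaces and EndpointSlices through the in-cluster apiserver VIP 10.96.0.1:443 and keeps retrying while the control plane restarts. As a rough sketch only (not CoreDNS's actual code, and assuming it runs in-cluster), the failing request is equivalent to:
	
	// Hedged sketch of the list call the reflector retries above; rest.InClusterConfig
	// resolves to https://10.96.0.1:443, the address seen failing in the log.
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// discovery.k8s.io/v1 endpointslices?limit=500 -- the request shown in the log.
		slices, err := cs.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{Limit: 500})
		if err != nil {
			// While the apiserver is unreachable this surfaces as
			// "dial tcp 10.96.0.1:443: connect: no route to host" or "connection refused".
			log.Fatal(err)
		}
		fmt.Printf("listed %d EndpointSlices\n", len(slices.Items))
	}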
	
	
	==> describe nodes <==
	Name:               ha-322980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T21_16_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-322980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a019ec328ab467ca04365748baaa367
	  System UUID:                3a019ec3-28ab-467c-a043-65748baaa367
	  Boot ID:                    c9018f9a-79b9-43c5-a307-9ae120187dfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xt9l5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-78zmw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-fqt45             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-322980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-lwtnx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-322980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-322980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-8xdzd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-322980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-322980                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 13m                   kube-proxy       
	  Normal   Starting                 52s                   kube-proxy       
	  Normal   NodeHasSufficientPID     13m                   kubelet          Node ha-322980 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                   kubelet          Node ha-322980 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                   kubelet          Node ha-322980 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                   node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   NodeReady                13m                   kubelet          Node ha-322980 status is now: NodeReady
	  Normal   RegisteredNode           11m                   node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           10m                   node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Warning  ContainerGCFailed        113s (x2 over 2m53s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           42s                   node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           36s                   node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	
	
	Name:               ha-322980-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:30:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-322980-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5d1651406694de39b61eff245fccb61
	  System UUID:                c5d16514-0669-4de3-9b61-eff245fccb61
	  Boot ID:                    c80c8e5b-42c8-42c0-ad77-611ed5db7d30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tbmdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-322980-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-lmgkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-322980-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-322980-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wbf7q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-322980-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-322980-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  NodeNotReady             8m                 node-controller  Node ha-322980-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           42s                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           36s                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	
	
	Name:               ha-322980-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_19_27_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:19:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:24:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:19:53 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ha-322980-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1273ee04f2de426dbabc52e46998b0eb
	  System UUID:                1273ee04-f2de-426d-babc-52e46998b0eb
	  Boot ID:                    35fdaf53-db70-4446-a9c3-71a0744d3bea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xz268                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-322980-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-ks55j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-322980-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-322980-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-nqq6b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-322980-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-322980-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-322980-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-322980-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-322980-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           42s                node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  RegisteredNode           36s                node-controller  Node ha-322980-m03 event: Registered Node ha-322980-m03 in Controller
	  Normal  NodeNotReady             2s                 node-controller  Node ha-322980-m03 status is now: NodeNotReady
	
	
	Name:               ha-322980-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_20_29_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:20:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:24:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-322980-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c8db3356b24ba197e491501ddbfd49
	  System UUID:                a4c8db33-56b2-4ba1-97e4-91501ddbfd49
	  Boot ID:                    9ee2f344-9fdd-4182-a447-83dc5b12dc4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nnc4q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m39s
	  kube-system                 kube-proxy-68cxr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m39s (x3 over 9m40s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x3 over 9m40s)  kubelet          Node ha-322980-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x3 over 9m40s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m38s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           9m37s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           9m35s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  NodeReady                9m1s                   kubelet          Node ha-322980-m04 status is now: NodeReady
	  Normal  RegisteredNode           42s                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           36s                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  NodeNotReady             2s                     node-controller  Node ha-322980-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +8.501831] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.064246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066779] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.227983] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.115503] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.299594] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +5.048468] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.072016] kauditd_printk_skb: 130 callbacks suppressed
	[May 5 21:16] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.935027] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.150561] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.089537] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.653864] kauditd_printk_skb: 21 callbacks suppressed
	[May 5 21:18] kauditd_printk_skb: 74 callbacks suppressed
	[May 5 21:28] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.163899] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.174337] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.161075] systemd-fstab-generator[3841]: Ignoring "noauto" option for root device
	[  +0.301050] systemd-fstab-generator[3869]: Ignoring "noauto" option for root device
	[  +0.856611] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	[  +4.601668] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.029599] kauditd_printk_skb: 86 callbacks suppressed
	[ +11.080916] kauditd_printk_skb: 1 callbacks suppressed
	[May 5 21:29] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.083309] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a] <==
	{"level":"warn","ts":"2024-05-05T21:29:56.10786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.110643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.112486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.114533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.207485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.307598Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.326555Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-05T21:29:56.326635Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-05T21:29:56.797621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.807832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.817434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.907884Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.915391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.00786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.107911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.208202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.307869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:58.402875Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:29:58.402883Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:00.329526Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:00.329598Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:03.403963Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:03.40398Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:04.331623Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:04.331672Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	
	
	==> etcd [97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923] <==
	{"level":"info","ts":"2024-05-05T21:26:54.126418Z","caller":"traceutil/trace.go:171","msg":"trace[1422100536] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"439.354424ms","start":"2024-05-05T21:26:53.687056Z","end":"2024-05-05T21:26:54.126411Z","steps":["trace[1422100536] 'agreement among raft nodes before linearized reading'  (duration: 436.623479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:26:54.126525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:26:53.687052Z","time spent":"439.463134ms","remote":"127.0.0.1:43734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:26:54.176083Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:26:54.176142Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:26:54.176235Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"dced536bf07718ca","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-05T21:26:54.176415Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176459Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.17649Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176585Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176654Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176819Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.17686Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176869Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.176883Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.176901Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177003Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177072Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177115Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.180402Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:26:54.180578Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:26:54.180616Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-322980","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.178:2380"],"advertise-client-urls":["https://192.168.39.178:2379"]}
	
	
	==> kernel <==
	 21:30:07 up 14 min,  0 users,  load average: 1.03, 0.74, 0.41
	Linux ha-322980 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2e9a9bfcaf10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4] <==
	I0505 21:28:33.366089       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0505 21:28:33.366248       1 main.go:107] hostIP = 192.168.39.178
	podIP = 192.168.39.178
	I0505 21:28:33.366430       1 main.go:116] setting mtu 1500 for CNI 
	I0505 21:28:33.366469       1 main.go:146] kindnetd IP family: "ipv4"
	I0505 21:28:33.366564       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0505 21:28:36.455538       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0505 21:28:39.527033       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0505 21:28:50.536997       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0505 21:28:55.189449       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.78:51834->10.96.0.1:443: read: connection reset by peer
	I0505 21:28:58.192741       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e] <==
	I0505 21:29:43.859530       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0505 21:29:44.460640       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:29:44.460733       1 main.go:227] handling current node
	I0505 21:29:44.464875       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:29:44.465156       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:29:44.465740       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:29:44.465833       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:29:44.466009       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:29:44.466018       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:29:54.483489       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:29:54.483510       1 main.go:227] handling current node
	I0505 21:29:54.483519       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:29:54.483524       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:29:54.483635       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:29:54.483676       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:29:54.483744       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:29:54.483838       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:30:04.500852       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:30:04.500874       1 main.go:227] handling current node
	I0505 21:30:04.500883       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:30:04.500888       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:30:04.500984       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:30:04.500989       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:30:04.501029       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:30:04.501033       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d] <==
	I0505 21:29:14.675369       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:29:14.675740       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0505 21:29:14.747127       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:29:14.748867       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:29:14.749586       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:29:14.753732       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:29:14.753841       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:29:14.753861       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:29:14.753867       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:29:14.753872       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:29:14.756745       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0505 21:29:14.764451       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.228 192.168.39.29]
	I0505 21:29:14.776493       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:29:14.778408       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:29:14.778419       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:29:14.810622       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:29:14.811940       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:29:14.811990       1 policy_source.go:224] refreshing policies
	I0505 21:29:14.844983       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:29:14.866878       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:29:14.882093       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0505 21:29:14.889341       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0505 21:29:15.655887       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0505 21:29:16.108522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.228 192.168.39.29]
	W0505 21:29:26.114517       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.228]
	
	
	==> kube-apiserver [d1ff1f42ee456adbb1ab902c56155f156eb2f298a79dc46f7d316794adc69f37] <==
	I0505 21:28:33.077501       1 options.go:221] external host was not specified, using 192.168.39.178
	I0505 21:28:33.079214       1 server.go:148] Version: v1.30.0
	I0505 21:28:33.079278       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:28:34.149386       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0505 21:28:34.175562       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:28:34.176231       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0505 21:28:34.181000       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0505 21:28:34.181265       1 instance.go:299] Using reconciler: lease
	W0505 21:28:54.146228       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0505 21:28:54.146452       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0505 21:28:54.181961       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d] <==
	I0505 21:28:34.536857       1 serving.go:380] Generated self-signed cert in-memory
	I0505 21:28:34.963263       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 21:28:34.963366       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:28:34.965468       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:28:34.965623       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 21:28:34.966267       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:28:34.966207       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0505 21:28:55.190151       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.178:8443/healthz\": dial tcp 192.168.39.178:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9] <==
	I0505 21:29:31.423617       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0505 21:29:31.425126       1 shared_informer.go:320] Caches are synced for endpoint
	I0505 21:29:31.434153       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0505 21:29:31.440133       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0505 21:29:31.448885       1 shared_informer.go:320] Caches are synced for attach detach
	I0505 21:29:31.453134       1 shared_informer.go:320] Caches are synced for ephemeral
	I0505 21:29:31.474194       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0505 21:29:31.491709       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0505 21:29:31.501315       1 shared_informer.go:320] Caches are synced for deployment
	I0505 21:29:31.504733       1 shared_informer.go:320] Caches are synced for crt configmap
	I0505 21:29:31.597296       1 shared_informer.go:320] Caches are synced for stateful set
	I0505 21:29:31.628093       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:29:31.628297       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:29:31.652424       1 shared_informer.go:320] Caches are synced for disruption
	I0505 21:29:32.045082       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:29:32.086936       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:29:32.087026       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0505 21:29:38.137225       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-zwjpl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-zwjpl\": the object has been modified; please apply your changes to the latest version and try again"
	I0505 21:29:38.138469       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bd0b72d6-9faf-4581-8043-a8dc8030d953", APIVersion:"v1", ResourceVersion:"239", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-zwjpl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-zwjpl": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:29:38.217559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.82676ms"
	I0505 21:29:38.217748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="123.485µs"
	I0505 21:29:38.240587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.093008ms"
	I0505 21:29:38.241056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.282µs"
	I0505 21:30:05.900371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.249589ms"
	I0505 21:30:05.900889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.064µs"
	
	
	==> kube-proxy [4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c] <==
	E0505 21:25:37.638259       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:37.638443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:37.638505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:40.774308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:40.774433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:43.846169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:43.846281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:43.846378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:43.846415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:49.288357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:49.288477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:52.359464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:52.359604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:52.359850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:52.360135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:01.575624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:01.575831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:07.718256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:07.718308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:16.935169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:16.935384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:26.151522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:26.151611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:50.729138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:50.729278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b] <==
	E0505 21:28:56.678399       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0505 21:29:15.111476       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0505 21:29:15.111631       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0505 21:29:15.158669       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:29:15.159039       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:29:15.159098       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:29:15.162311       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:29:15.162696       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:29:15.162874       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:29:15.164823       1 config.go:192] "Starting service config controller"
	I0505 21:29:15.164887       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:29:15.164934       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:29:15.164953       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:29:15.165937       1 config.go:319] "Starting node config controller"
	I0505 21:29:15.165973       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0505 21:29:18.185437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.185597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:29:18.185705       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.185878       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:29:18.185973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.186035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.186283       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0505 21:29:19.368732       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:29:19.565743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:29:19.566403       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b] <==
	W0505 21:26:48.759656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:26:48.759854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 21:26:49.007969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:49.008025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:49.192688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:26:49.192899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:26:49.499913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 21:26:49.500129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 21:26:49.690540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:26:49.690638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 21:26:49.803920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 21:26:49.804019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:26:49.837414       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:26:49.837447       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:26:50.166566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:50.166673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:50.189955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:26:50.190063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:26:50.388221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:50.388612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:51.048541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:26:51.048638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:26:51.142262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:51.142364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:54.094710       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f] <==
	W0505 21:29:04.997573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.178:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:04.997652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.178:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.128425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.128575       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.147199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.147341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.216455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.178:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.216557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.178:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.321585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.321664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:09.702065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.178:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:09.702200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.178:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:10.927665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.178:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:10.927720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.178:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:12.398235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:12.398339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:14.689401       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:29:14.691913       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:29:14.723694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:29:14.723815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 21:29:14.723924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:29:14.723961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 21:29:14.727090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:29:14.727163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0505 21:29:39.409575       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 05 21:29:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:29:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:29:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:29:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:29:14 ha-322980 kubelet[1385]: I0505 21:29:14.452735    1385 scope.go:117] "RemoveContainer" containerID="8f325a9ea25d6ff0517a638bff175fe1f4c646916941e4d3a93f5ff6f13f0187"
	May 05 21:29:15 ha-322980 kubelet[1385]: E0505 21:29:15.110143    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:15 ha-322980 kubelet[1385]: I0505 21:29:15.110160    1385 status_manager.go:853] "Failed to get status for pod" podUID="578ccf60a9d00c195d5069c63fb0b319" pod="kube-system/kube-controller-manager-ha-322980" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:18 ha-322980 kubelet[1385]: E0505 21:29:18.182266    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:18 ha-322980 kubelet[1385]: E0505 21:29:18.183048    1385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	May 05 21:29:18 ha-322980 kubelet[1385]: I0505 21:29:18.183205    1385 status_manager.go:853] "Failed to get status for pod" podUID="d0b6492d-c0f5-45dd-8482-c447b81daa66" pod="kube-system/kube-proxy-8xdzd" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:18 ha-322980 kubelet[1385]: I0505 21:29:18.378035    1385 scope.go:117] "RemoveContainer" containerID="06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d"
	May 05 21:29:27 ha-322980 kubelet[1385]: I0505 21:29:27.377745    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:29:27 ha-322980 kubelet[1385]: E0505 21:29:27.378418    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc212ac3-7499-4edc-b5a5-622b0bd4a891)\"" pod="kube-system/storage-provisioner" podUID="bc212ac3-7499-4edc-b5a5-622b0bd4a891"
	May 05 21:29:29 ha-322980 kubelet[1385]: I0505 21:29:29.377956    1385 scope.go:117] "RemoveContainer" containerID="2e9a9bfcaf10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4"
	May 05 21:29:29 ha-322980 kubelet[1385]: E0505 21:29:29.378401    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-lwtnx_kube-system(4033535e-69f1-426c-bb17-831fad6336d5)\"" pod="kube-system/kindnet-lwtnx" podUID="4033535e-69f1-426c-bb17-831fad6336d5"
	May 05 21:29:40 ha-322980 kubelet[1385]: I0505 21:29:40.378180    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:29:40 ha-322980 kubelet[1385]: E0505 21:29:40.380981    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc212ac3-7499-4edc-b5a5-622b0bd4a891)\"" pod="kube-system/storage-provisioner" podUID="bc212ac3-7499-4edc-b5a5-622b0bd4a891"
	May 05 21:29:43 ha-322980 kubelet[1385]: I0505 21:29:43.378120    1385 scope.go:117] "RemoveContainer" containerID="2e9a9bfcaf10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4"
	May 05 21:29:51 ha-322980 kubelet[1385]: I0505 21:29:51.378189    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:29:51 ha-322980 kubelet[1385]: E0505 21:29:51.378643    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc212ac3-7499-4edc-b5a5-622b0bd4a891)\"" pod="kube-system/storage-provisioner" podUID="bc212ac3-7499-4edc-b5a5-622b0bd4a891"
	May 05 21:29:54 ha-322980 kubelet[1385]: I0505 21:29:54.378500    1385 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-322980" podUID="8743dbcc-49f9-46e8-8088-cd5020429c08"
	May 05 21:29:54 ha-322980 kubelet[1385]: I0505 21:29:54.400896    1385 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-322980"
	May 05 21:29:55 ha-322980 kubelet[1385]: I0505 21:29:55.028652    1385 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-322980" podUID="8743dbcc-49f9-46e8-8088-cd5020429c08"
	May 05 21:30:04 ha-322980 kubelet[1385]: I0505 21:30:04.378605    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:30:05 ha-322980 kubelet[1385]: I0505 21:30:05.101417    1385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-322980" podStartSLOduration=11.101388283 podStartE2EDuration="11.101388283s" podCreationTimestamp="2024-05-05 21:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-05 21:30:04.427350229 +0000 UTC m=+830.199064321" watchObservedRunningTime="2024-05-05 21:30:05.101388283 +0000 UTC m=+830.873102377"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 21:30:06.314774   37391 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18602-11466/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-322980 -n ha-322980
helpers_test.go:261: (dbg) Run:  kubectl --context ha-322980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (318.28s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-322980 node delete m03 -v=7 --alsologtostderr: (4.644075102s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 7 (496.575314ms)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:30:13.284131   37621 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:30:13.284238   37621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:30:13.284250   37621 out.go:304] Setting ErrFile to fd 2...
	I0505 21:30:13.284256   37621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:30:13.284462   37621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:30:13.284666   37621 out.go:298] Setting JSON to false
	I0505 21:30:13.284693   37621 mustload.go:65] Loading cluster: ha-322980
	I0505 21:30:13.284769   37621 notify.go:220] Checking for updates...
	I0505 21:30:13.285066   37621 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:30:13.285082   37621 status.go:255] checking status of ha-322980 ...
	I0505 21:30:13.285522   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:13.285595   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:13.301563   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0505 21:30:13.302001   37621 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:13.302539   37621 main.go:141] libmachine: Using API Version  1
	I0505 21:30:13.302565   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:13.303001   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:13.303214   37621 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:30:13.305499   37621 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:30:13.305525   37621 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:30:13.305825   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:13.305864   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:13.321884   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0505 21:30:13.322462   37621 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:13.322956   37621 main.go:141] libmachine: Using API Version  1
	I0505 21:30:13.322977   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:13.323329   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:13.323544   37621 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:30:13.326620   37621 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:30:13.327064   37621 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:30:13.327101   37621 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:30:13.327273   37621 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:30:13.327658   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:13.327703   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:13.344192   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0505 21:30:13.344571   37621 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:13.345021   37621 main.go:141] libmachine: Using API Version  1
	I0505 21:30:13.345043   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:13.345355   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:13.345537   37621 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:30:13.345758   37621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:30:13.345782   37621 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:30:13.348753   37621 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:30:13.349200   37621 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:30:13.349221   37621 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:30:13.349448   37621 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:30:13.349653   37621 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:30:13.349786   37621 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:30:13.349944   37621 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:30:13.433197   37621 ssh_runner.go:195] Run: systemctl --version
	I0505 21:30:13.443004   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:30:13.464199   37621 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:30:13.464244   37621 api_server.go:166] Checking apiserver status ...
	I0505 21:30:13.464299   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:30:13.483461   37621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5157/cgroup
	W0505 21:30:13.494237   37621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5157/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:30:13.494298   37621 ssh_runner.go:195] Run: ls
	I0505 21:30:13.499914   37621 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:30:13.504933   37621 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:30:13.504969   37621 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:30:13.504982   37621 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:30:13.505004   37621 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:30:13.505419   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:13.505471   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:13.521932   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38461
	I0505 21:30:13.522372   37621 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:13.522855   37621 main.go:141] libmachine: Using API Version  1
	I0505 21:30:13.522882   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:13.523286   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:13.523514   37621 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:30:13.525209   37621 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:30:13.525224   37621 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:30:13.525538   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:13.525577   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:13.540434   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45113
	I0505 21:30:13.540949   37621 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:13.541383   37621 main.go:141] libmachine: Using API Version  1
	I0505 21:30:13.541406   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:13.541753   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:13.541984   37621 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:30:13.545044   37621 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:30:13.545529   37621 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:28:40 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:30:13.545563   37621 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:30:13.545700   37621 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:30:13.546029   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:13.546075   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:13.561671   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I0505 21:30:13.562091   37621 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:13.562510   37621 main.go:141] libmachine: Using API Version  1
	I0505 21:30:13.562532   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:13.562817   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:13.563055   37621 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:30:13.563239   37621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:30:13.563261   37621 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:30:13.565928   37621 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:30:13.566340   37621 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:28:40 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:30:13.566374   37621 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:30:13.566503   37621 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:30:13.566655   37621 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:30:13.566805   37621 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:30:13.566955   37621 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:30:13.649841   37621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:30:13.666863   37621 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:30:13.666901   37621 api_server.go:166] Checking apiserver status ...
	I0505 21:30:13.666942   37621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:30:13.683202   37621 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	W0505 21:30:13.694920   37621 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:30:13.694972   37621 ssh_runner.go:195] Run: ls
	I0505 21:30:13.701162   37621 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:30:13.706794   37621 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0505 21:30:13.706817   37621 status.go:422] ha-322980-m02 apiserver status = Running (err=<nil>)
	I0505 21:30:13.706842   37621 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:30:13.706865   37621 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:30:13.707145   37621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:13.707186   37621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:13.722844   37621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0505 21:30:13.723309   37621 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:13.723819   37621 main.go:141] libmachine: Using API Version  1
	I0505 21:30:13.723842   37621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:13.724136   37621 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:13.724311   37621 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:30:13.725829   37621 status.go:330] ha-322980-m04 host status = "Stopped" (err=<nil>)
	I0505 21:30:13.725848   37621 status.go:343] host is not running, skipping remaining checks
	I0505 21:30:13.725857   37621 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-322980 -n ha-322980
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-322980 logs -n 25: (1.977322111s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m04 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp testdata/cp-test.txt                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m04_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03:/home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m03 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-322980 node stop m02 -v=7                                                     | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-322980 node start m02 -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980 -v=7                                                           | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-322980 -v=7                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-322980 --wait=true -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC |                     |
	| node    | ha-322980 node delete m03 -v=7                                                   | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC | 05 May 24 21:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:26:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:26:53.140232   36399 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:26:53.140470   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140481   36399 out.go:304] Setting ErrFile to fd 2...
	I0505 21:26:53.140485   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140670   36399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:26:53.141198   36399 out.go:298] Setting JSON to false
	I0505 21:26:53.142084   36399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4160,"bootTime":1714940253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:26:53.142153   36399 start.go:139] virtualization: kvm guest
	I0505 21:26:53.144497   36399 out.go:177] * [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:26:53.146260   36399 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:26:53.146193   36399 notify.go:220] Checking for updates...
	I0505 21:26:53.148784   36399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:26:53.150106   36399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:26:53.151383   36399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:26:53.152533   36399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:26:53.153673   36399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:26:53.155327   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.155445   36399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:26:53.155966   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.156031   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.171200   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0505 21:26:53.171619   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.172129   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.172150   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.172473   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.172681   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.208543   36399 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:26:53.209967   36399 start.go:297] selected driver: kvm2
	I0505 21:26:53.209989   36399 start.go:901] validating driver "kvm2" against &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.210123   36399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:26:53.210493   36399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.210573   36399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:26:53.224851   36399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:26:53.225522   36399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:26:53.225581   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:26:53.225592   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:26:53.225643   36399 start.go:340] cluster config:
	{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.225764   36399 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.228370   36399 out.go:177] * Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	I0505 21:26:53.230047   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:26:53.230086   36399 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:26:53.230093   36399 cache.go:56] Caching tarball of preloaded images
	I0505 21:26:53.230188   36399 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:26:53.230200   36399 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:26:53.230314   36399 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:26:53.230520   36399 start.go:360] acquireMachinesLock for ha-322980: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:26:53.230568   36399 start.go:364] duration metric: took 30.264µs to acquireMachinesLock for "ha-322980"
	I0505 21:26:53.230584   36399 start.go:96] Skipping create...Using existing machine configuration
	I0505 21:26:53.230594   36399 fix.go:54] fixHost starting: 
	I0505 21:26:53.230851   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.230880   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.244841   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
	I0505 21:26:53.245311   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.245787   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.245816   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.246134   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.246309   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.246459   36399 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:26:53.248132   36399 fix.go:112] recreateIfNeeded on ha-322980: state=Running err=<nil>
	W0505 21:26:53.248160   36399 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 21:26:53.251264   36399 out.go:177] * Updating the running kvm2 "ha-322980" VM ...
	I0505 21:26:53.252511   36399 machine.go:94] provisionDockerMachine start ...
	I0505 21:26:53.252536   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.252737   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.255085   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255500   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.255526   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255681   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.255852   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256000   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256133   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.256288   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.256537   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.256551   36399 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 21:26:53.369308   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.369346   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369606   36399 buildroot.go:166] provisioning hostname "ha-322980"
	I0505 21:26:53.369639   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369820   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.372637   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373124   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.373151   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373370   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.373567   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373735   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373877   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.374056   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.374277   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.374294   36399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980 && echo "ha-322980" | sudo tee /etc/hostname
	I0505 21:26:53.506808   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.506842   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.509223   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509600   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.509626   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509814   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.509985   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510157   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510289   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.510416   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.510579   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.510595   36399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:26:53.629485   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:26:53.629511   36399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:26:53.629528   36399 buildroot.go:174] setting up certificates
	I0505 21:26:53.629535   36399 provision.go:84] configureAuth start
	I0505 21:26:53.629551   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.629801   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:26:53.632716   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633088   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.633131   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633288   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.635715   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636140   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.636167   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636330   36399 provision.go:143] copyHostCerts
	I0505 21:26:53.636361   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636406   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:26:53.636418   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636502   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:26:53.636618   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636644   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:26:53.636654   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636691   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:26:53.636765   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636795   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:26:53.636805   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636837   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:26:53.636954   36399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980 san=[127.0.0.1 192.168.39.178 ha-322980 localhost minikube]
	I0505 21:26:53.769238   36399 provision.go:177] copyRemoteCerts
	I0505 21:26:53.769301   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:26:53.769337   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.772321   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772662   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.772698   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772861   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.773067   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.773321   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.773466   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:26:53.859548   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:26:53.859622   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:26:53.890248   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:26:53.890322   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:26:53.919935   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:26:53.919995   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0505 21:26:53.952579   36399 provision.go:87] duration metric: took 323.032938ms to configureAuth
	I0505 21:26:53.952610   36399 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:26:53.952915   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.952991   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.955785   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956181   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.956212   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956489   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.956663   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.956856   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.957020   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.957195   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.957360   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.957381   36399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:28:24.802156   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:28:24.802179   36399 machine.go:97] duration metric: took 1m31.549649754s to provisionDockerMachine
	I0505 21:28:24.802191   36399 start.go:293] postStartSetup for "ha-322980" (driver="kvm2")
	I0505 21:28:24.802201   36399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:28:24.802219   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.802523   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:28:24.802541   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.805857   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806374   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.806400   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806574   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.806774   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.806947   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.807068   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:24.897937   36399 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:28:24.902998   36399 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:28:24.903020   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:28:24.903069   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:28:24.903140   36399 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:28:24.903156   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:28:24.903230   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:28:24.914976   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:24.942422   36399 start.go:296] duration metric: took 140.219842ms for postStartSetup
	I0505 21:28:24.942466   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.942795   36399 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 21:28:24.942828   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.945241   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945698   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.945723   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945879   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.946049   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.946187   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.946343   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	W0505 21:28:25.031258   36399 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0505 21:28:25.031281   36399 fix.go:56] duration metric: took 1m31.80069046s for fixHost
	I0505 21:28:25.031302   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.033882   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034222   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.034253   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034384   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.034608   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034808   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034979   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.035177   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:28:25.035393   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:28:25.035405   36399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:28:25.145055   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944505.115925429
	
	I0505 21:28:25.145080   36399 fix.go:216] guest clock: 1714944505.115925429
	I0505 21:28:25.145089   36399 fix.go:229] Guest: 2024-05-05 21:28:25.115925429 +0000 UTC Remote: 2024-05-05 21:28:25.031289392 +0000 UTC m=+91.939181071 (delta=84.636037ms)
	I0505 21:28:25.145109   36399 fix.go:200] guest clock delta is within tolerance: 84.636037ms
	I0505 21:28:25.145114   36399 start.go:83] releasing machines lock for "ha-322980", held for 1m31.914536671s
	I0505 21:28:25.145132   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.145355   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:25.147953   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148359   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.148378   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148549   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149031   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149206   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149302   36399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:28:25.149351   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.149450   36399 ssh_runner.go:195] Run: cat /version.json
	I0505 21:28:25.149476   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.152099   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152175   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152532   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152556   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152579   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152591   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152718   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152853   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152916   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.152986   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.153044   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153100   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153155   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.153222   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.262146   36399 ssh_runner.go:195] Run: systemctl --version
	I0505 21:28:25.269585   36399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:28:25.445107   36399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:28:25.452093   36399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:28:25.452159   36399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:28:25.462054   36399 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 21:28:25.462081   36399 start.go:494] detecting cgroup driver to use...
	I0505 21:28:25.462145   36399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:28:25.479385   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:28:25.493826   36399 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:28:25.493881   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:28:25.508310   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:28:25.522866   36399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:28:25.681241   36399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:28:25.837193   36399 docker.go:233] disabling docker service ...
	I0505 21:28:25.837273   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:28:25.854654   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:28:25.869168   36399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:28:26.021077   36399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:28:26.172560   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
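
The systemctl calls above stop, disable, and mask Docker's socket and service so that CRI-O remains the only container runtime answering on the node. A rough local equivalent of that sequence in Go (minikube runs these over SSH through ssh_runner; using os/exec here is purely illustrative):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors the logged commands: stop both units, then disable the
        // socket and mask the service so they cannot be re-activated.
        steps := [][]string{
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, args := range steps {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                log.Printf("%v failed: %v (%s)", args, err, out)
            }
        }
    }
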
	I0505 21:28:26.187950   36399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:28:26.209945   36399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:28:26.210011   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.221767   36399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:28:26.221821   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.233242   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.244526   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.255938   36399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:28:26.269084   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.280325   36399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.293020   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.303829   36399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:28:26.314019   36399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:28:26.324025   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:26.475013   36399 ssh_runner.go:195] Run: sudo systemctl restart crio
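
The sed commands above adjust the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted to pick up the changes. A small Go sketch of the two core substitutions, applied to a placeholder snippet rather than the VM's real file:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Placeholder drop-in contents; the real file lives at
        // /etc/crio/crio.conf.d/02-crio.conf on the node.
        conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"

        // Same effect as the logged sed expressions: force the pause image
        // and switch the cgroup manager to cgroupfs.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }
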
	I0505 21:28:26.786010   36399 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:28:26.786082   36399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:28:26.791904   36399 start.go:562] Will wait 60s for crictl version
	I0505 21:28:26.791958   36399 ssh_runner.go:195] Run: which crictl
	I0505 21:28:26.796301   36399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:28:26.839834   36399 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:28:26.839910   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.872417   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.905097   36399 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:28:26.906534   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:26.909264   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909627   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:26.909642   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909860   36399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:28:26.915241   36399 kubeadm.go:877] updating cluster {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:28:26.915374   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:28:26.915433   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:26.965243   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:26.965271   36399 crio.go:433] Images already preloaded, skipping extraction
	I0505 21:28:26.965342   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:27.008398   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:27.008421   36399 cache_images.go:84] Images are preloaded, skipping loading
	I0505 21:28:27.008433   36399 kubeadm.go:928] updating node { 192.168.39.178 8443 v1.30.0 crio true true} ...
	I0505 21:28:27.008545   36399 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:28:27.008627   36399 ssh_runner.go:195] Run: crio config
	I0505 21:28:27.062535   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:28:27.062560   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:28:27.062572   36399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:28:27.062601   36399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-322980 NodeName:ha-322980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:28:27.062742   36399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-322980"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
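
The block above is the full kubeadm configuration minikube renders for this control plane: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration joined with "---" separators (later copied to /var/tmp/minikube/kubeadm.yaml.new). A tiny Go sketch that splits such a multi-document file and lists each document's kind, using an abbreviated stand-in for the real contents:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Abbreviated stand-in for the generated kubeadm.yaml shown above.
        generated := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
        for i, doc := range strings.Split(generated, "---") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
                    fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
                }
            }
        }
    }
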
	
	I0505 21:28:27.062764   36399 kube-vip.go:111] generating kube-vip config ...
	I0505 21:28:27.062801   36399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:28:27.076515   36399 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:28:27.076654   36399 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
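
The manifest above is a static pod: kube-vip runs on the host network with NET_ADMIN/NET_RAW so it can announce the control-plane VIP 192.168.39.254 via ARP and load-balance port 8443 across the control-plane nodes. It is delivered by copying the rendered YAML into the kubelet's static pod directory (the kube-vip.yaml scp a few lines below). A minimal write sketch, with the manifest body abbreviated:

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Abbreviated manifest; the full rendered pod spec is shown above.
        manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")

        // staticPodPath from the KubeletConfiguration earlier in the log.
        dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
        if err := os.WriteFile(dst, manifest, 0o644); err != nil {
            log.Fatalf("writing %s: %v", dst, err)
        }
    }
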
	I0505 21:28:27.076721   36399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:28:27.087275   36399 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:28:27.087332   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 21:28:27.097140   36399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0505 21:28:27.115596   36399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:28:27.133989   36399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0505 21:28:27.152325   36399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:28:27.171626   36399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:28:27.176255   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:27.333712   36399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:28:27.351006   36399 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.178
	I0505 21:28:27.351031   36399 certs.go:194] generating shared ca certs ...
	I0505 21:28:27.351047   36399 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.351203   36399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:28:27.351247   36399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:28:27.351256   36399 certs.go:256] generating profile certs ...
	I0505 21:28:27.351322   36399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:28:27.351349   36399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019
	I0505 21:28:27.351360   36399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.29 192.168.39.254]
	I0505 21:28:27.773033   36399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 ...
	I0505 21:28:27.773068   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019: {Name:mk074feb2c078ad2537bc4b0f4572ad95bc07b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773263   36399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 ...
	I0505 21:28:27.773277   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019: {Name:mk2665c22bdd3135504eab2bc878577f3cbff151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773371   36399 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:28:27.773505   36399 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:28:27.773631   36399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:28:27.773646   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:28:27.773658   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:28:27.773671   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:28:27.773683   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:28:27.773695   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:28:27.773707   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:28:27.773719   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:28:27.773731   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:28:27.773773   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:28:27.773800   36399 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:28:27.773809   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:28:27.773829   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:28:27.773850   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:28:27.773870   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:28:27.773905   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:27.773929   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:27.773943   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:28:27.773955   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:28:27.774493   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:28:27.804503   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:28:27.830821   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:28:27.858720   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:28:27.886328   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 21:28:27.912918   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:28:27.940090   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:28:27.967530   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:28:27.994650   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:28:28.022349   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:28:28.049290   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:28:28.075642   36399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:28:28.094413   36399 ssh_runner.go:195] Run: openssl version
	I0505 21:28:28.101667   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:28:28.114593   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119911   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119966   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.126513   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:28:28.136871   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:28:28.148896   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154099   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154153   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.160414   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:28:28.171000   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:28:28.184015   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189022   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189068   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.196002   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
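
Each of the three certificates above is hashed with "openssl x509 -hash -noout -in <cert>" and exposed to the system trust store through an /etc/ssl/certs/<hash>.0 symlink, which is how OpenSSL locates CA certificates by subject hash. A short Go sketch of one such link (paths are placeholders; it shells out to the same openssl invocation seen in the log):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        // Same call as in the log: print the subject hash of the certificate.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        // Link /etc/ssl/certs/<hash>.0 at the certificate, as "ln -fs" does above.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
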
	I0505 21:28:28.206271   36399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:28:28.211552   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 21:28:28.218198   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 21:28:28.224606   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 21:28:28.230931   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 21:28:28.237169   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 21:28:28.243293   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
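
The six openssl runs above use "-checkend 86400" to confirm that none of the cluster's client and serving certificates expire within the next 24 hours. The same check expressed with Go's crypto/x509 (the certificate path is a placeholder):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of "openssl x509 -noout -checkend 86400".
        expiresSoon := time.Until(cert.NotAfter) < 24*time.Hour
        fmt.Printf("NotAfter=%s, expires within 24h: %v\n", cert.NotAfter, expiresSoon)
    }
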
	I0505 21:28:28.249553   36399 kubeadm.go:391] StartCluster: {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:28:28.249672   36399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:28:28.249724   36399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:28:28.296303   36399 cri.go:89] found id: "e643f88ce68e29460e940448779ea8b8b309d24d97a13d57fe0b3139f920999a"
	I0505 21:28:28.296320   36399 cri.go:89] found id: "31d5340e9679504cad0e8fc998a460f07a03ad902d57ee2dea4946953cbad32d"
	I0505 21:28:28.296324   36399 cri.go:89] found id: "e6747aa9368ee1e6895cb4bf1eed8173977dc9bddfc0ea1b03750a3d23697184"
	I0505 21:28:28.296327   36399 cri.go:89] found id: "7894a12a0cfac62f67b7770ea3e5c8dbc28723b9c7c40b415fcdcf36899ac17d"
	I0505 21:28:28.296330   36399 cri.go:89] found id: "8f325a9ea25d6ff0517a638bff175fe1f4c646916941e4d3a93f5ff6f13f0187"
	I0505 21:28:28.296333   36399 cri.go:89] found id: "0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b"
	I0505 21:28:28.296335   36399 cri.go:89] found id: "e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d"
	I0505 21:28:28.296338   36399 cri.go:89] found id: "63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355"
	I0505 21:28:28.296340   36399 cri.go:89] found id: "4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c"
	I0505 21:28:28.296347   36399 cri.go:89] found id: "abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f"
	I0505 21:28:28.296349   36399 cri.go:89] found id: "d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b"
	I0505 21:28:28.296353   36399 cri.go:89] found id: "b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f"
	I0505 21:28:28.296359   36399 cri.go:89] found id: "97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923"
	I0505 21:28:28.296363   36399 cri.go:89] found id: "6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d"
	I0505 21:28:28.296369   36399 cri.go:89] found id: ""
	I0505 21:28:28.296419   36399 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.656969651Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=9c72baf2-cafd-4c33-9133-1f7dfc92eea8 name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.657122157Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1714944552448513599,StartedAt:1714944552498852250,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/25cdcec1c37ba86157b0b42297dfe2cf/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/25cdcec1c37ba86157b0b42297dfe2cf/containers/kube-apiserver/a6af9745,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib
/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-322980_25cdcec1c37ba86157b0b42297dfe2cf/kube-apiserver/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9c72baf2-cafd-4c33-9133-1f7dfc92eea8 name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.657523682Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=42f52448-8748-4c91-b886-8c362bdd8f1d name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.657903460Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1714944545760656567,StartedAt:1714944545789324880,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/bbde9685-4494-40b7-bd53-9452fd970f5a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/bbde9685-4494-40b7-bd53-9452fd970f5a/containers/busybox/e61d9f8e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/bbde9685-4494-40b7-bd53-9452fd970f5a/volumes/kubernetes.io~projected/kube-api-access-vgqcw,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-fc5497c4f-xt9l5_bbde9685-4494-40b7-bd53-9452fd970f5a/busybox/1.log,Resources:&Container
Resources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=42f52448-8748-4c91-b886-8c362bdd8f1d name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.658272020Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,Verbose:false,}" file="otel-collector/interceptors.go:62" id=f4951866-bba6-43f6-a11c-191feae5cd39 name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.658393829Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1714944524546981147,StartedAt:1714944524576881832,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip:v0.7.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b4b10859196db0958fa2b1c992ad5e8a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b4b10859196db0958fa2b1c992ad5e8a/containers/kube-vip/7f69cb27,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/admin.conf,HostPath:/etc/kubernetes/admin.conf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-vip-ha-322980_b4b10859196db0958fa2b1c992ad5e8a/kube-vip/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000
,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=f4951866-bba6-43f6-a11c-191feae5cd39 name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.658890185Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5f64a9aa-b6fd-491b-b194-68b5b02f868b name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.659054623Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1714944513345110243,StartedAt:1714944513530078177,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d0b6492d-c0f5-45dd-8482-c447b81daa66/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d0b6492d-c0f5-45dd-8482-c447b81daa66/containers/kube-proxy/97c041dc,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kub
elet/pods/d0b6492d-c0f5-45dd-8482-c447b81daa66/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d0b6492d-c0f5-45dd-8482-c447b81daa66/volumes/kubernetes.io~projected/kube-api-access-vd6kl,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-8xdzd_d0b6492d-c0f5-45dd-8482-c447b81daa66/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collecto
r/interceptors.go:74" id=5f64a9aa-b6fd-491b-b194-68b5b02f868b name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.659363784Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d133f083-5243-4c73-9042-41f988c9860c name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.659456106Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1714944513094099179,StartedAt:1714944513153120390,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/27bdadca-f49c-4f50-b09c-07dd6067f39a/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/27bdadca-f49c-4f50-b09c-07dd6067f39a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/27bdadca-f49c-4f50-b09c-07dd6067f39a/containers/coredns/6da63d7d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/27bdadca-f49c-4f50-b09c-07dd6067f39a/volumes/kubernetes.io~projected/kube-api-access-dd5lf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-fqt45_27bdadca-f49c-4f50-b09c-07dd6067f39a/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d133f083-5243-4c73-9042-41f988c9860c name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.659915065Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=80ea36e8-60d0-4f27-b34c-bd6d6c193ad7 name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.660046934Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1714944512735922255,StartedAt:1714944512912868566,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/e066e3ad-0574-44f9-acab-d7cec8b86788/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e066e3ad-0574-44f9-acab-d7cec8b86788/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e066e3ad-0574-44f9-acab-d7cec8b86788/containers/coredns/a790f7f0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/e066e3ad-0574-44f9-acab-d7cec8b86788/volumes/kubernetes.io~projected/kube-api-access-h7b48,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-78zmw_e066e3ad-0574-44f9-acab-d7cec8b86788/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=80ea36e8-60d0-4f27-b34c-bd6d6c193ad7 name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.660449320Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=79510687-a3dc-4f89-9258-6465ca6a03cd name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.660569923Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1714944512598388906,StartedAt:1714944512936515533,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c588feae7d6204945d27bedaf4541d64/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c588feae7d6204945d27bedaf4541d64/containers/kube-scheduler/487bc73d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-ha-322980_c588feae7d6204945d27bedaf4541d64/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,
CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=79510687-a3dc-4f89-9258-6465ca6a03cd name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.660962774Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=cc73eacf-a952-4ae9-965d-33b81aa0300a name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.661167453Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1714944512575271293,StartedAt:1714944512763020069,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/58f12977082107510fdbb696cd218155/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/58f12977082107510fdbb696cd218155/containers/etcd/0d62bc8e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-322980_58f12
977082107510fdbb696cd218155/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cc73eacf-a952-4ae9-965d-33b81aa0300a name=/runtime.v1.RuntimeService/ContainerStatus
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.685705281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1a24c6d-b66a-4abf-b44e-0f18ab9fdf6d name=/runtime.v1.RuntimeService/Version
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.685871670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1a24c6d-b66a-4abf-b44e-0f18ab9fdf6d name=/runtime.v1.RuntimeService/Version
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.687358066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2b0d68e-959b-4c93-a0ef-2855e9d3a215 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.688000248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944614687971374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2b0d68e-959b-4c93-a0ef-2855e9d3a215 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.688717657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa2d4931-1c94-46ae-ade2-bb5e49844705 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.688837542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa2d4931-1c94-46ae-ade2-bb5e49844705 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.689260961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714944552394068781,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9a9bfcaf
10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944512763958489,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3
279ac134cd69b882db6,PodSandboxId:d0370265c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5
88feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[st
ring]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1ff1f42ee456adbb1ab902c56155f156eb2f298a79dc46f7d316794adc69f37,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944512314841248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kuberne
tes.container.hash: cdf39325,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernete
s.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa2d4931-1c94-46ae-ade2-bb5e49844705 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.693572638Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=14ceac04-5757-45cc-94c9-d10e602527f3 name=/runtime.v1.RuntimeService/Status
	May 05 21:30:14 ha-322980 crio[3885]: time="2024-05-05 21:30:14.693693406Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=14ceac04-5757-45cc-94c9-d10e602527f3 name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d64f6490c58bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago       Running             storage-provisioner       4                   68ad3ff729cb2       storage-provisioner
	d8e5582057ffa       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      31 seconds ago       Running             kindnet-cni               4                   64801e377a379       kindnet-lwtnx
	b48ee84cd3ceb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      56 seconds ago       Running             kube-controller-manager   2                   95e07dfd57148       kube-controller-manager-ha-322980
	a6a90eca6999f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            3                   e684baf5ef11a       kube-apiserver-ha-322980
	0c012cc95d188       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   68ad3ff729cb2       storage-provisioner
	378349efe1d23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9dfb38e6022a7       busybox-fc5497c4f-xt9l5
	ea2d43ee9b97e       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      About a minute ago   Running             kube-vip                  0                   8e6a479fdea9d       kube-vip-ha-322980
	067837019b5f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   d0370265c798a       coredns-7db6d8ff4d-fqt45
	2e9a9bfcaf10e       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Exited              kindnet-cni               3                   64801e377a379       kindnet-lwtnx
	06be80792a085       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Exited              kube-controller-manager   1                   95e07dfd57148       kube-controller-manager-ha-322980
	858ab02f25618       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   cd2a674999e8a       coredns-7db6d8ff4d-78zmw
	852f56752c643       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   e36e99eaa4a61       kube-proxy-8xdzd
	d864b4fda0bb9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   4777f05174b29       kube-scheduler-ha-322980
	366a7799ffc65       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   55b2bc86d17b3       etcd-ha-322980
	d1ff1f42ee456       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Exited              kube-apiserver            2                   e684baf5ef11a       kube-apiserver-ha-322980
	d9743f3da0de5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   238b5b24a572e       busybox-fc5497c4f-xt9l5
	0b360d142570d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   cd560b1055b35       coredns-7db6d8ff4d-fqt45
	e065fafa4b7aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   9f56aff0e5f86       coredns-7db6d8ff4d-78zmw
	4da23c6720461       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   8b3a42343ade0       kube-proxy-8xdzd
	d73ef383ce1ab       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      14 minutes ago       Exited              kube-scheduler            0                   913466e1710aa       kube-scheduler-ha-322980
	97769959b22d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   01d81d8dc3bcb       etcd-ha-322980
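
The table above is the crictl-style view of the same ListContainers response, with truncated container and pod sandbox IDs. A hedged sketch for re-checking the live state and pulling logs from a specific attempt, assuming the profile is still up; the container ID used below is illustrative and taken from the table:

    out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl ps -a"                         # running and exited containers
    out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl logs --tail 50 d1ff1f42ee456"  # e.g. the exited kube-apiserver attempt 2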
	
	
	==> coredns [067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6] <==
	Trace[1013525662]: [10.737662485s] [10.737662485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58534->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58550->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1024418272]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:28:44.711) (total time: 13247ms):
	Trace[1024418272]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58550->10.96.0.1:443: read: connection reset by peer 13247ms (21:28:57.959)
	Trace[1024418272]: [13.247425762s] [13.247425762s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58550->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
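
This CoreDNS replica repeatedly fails to list Services, EndpointSlices and Namespaces because the in-cluster apiserver Service at 10.96.0.1:443 is unreachable (connection refused, then no route to host) while the control plane restarts; the readiness plugin keeps reporting "Still waiting on: kubernetes" until a list succeeds. A hedged checklist for this symptom, assuming the kubectl context carries the profile name ha-322980:

    kubectl --context ha-322980 get --raw /readyz                    # apiserver health, reached via kubeconfig rather than the 10.96.0.1 VIP
    kubectl --context ha-322980 -n default get endpoints kubernetes  # backends currently behind 10.96.0.1
    kubectl --context ha-322980 -n kube-system get pods -l k8s-app=kube-dns -o wide
    # one-shot in-cluster DNS probe (hypothetical pod name dns-probe)
    kubectl --context ha-322980 run dns-probe --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default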
	
	
	==> coredns [0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b] <==
	[INFO] 10.244.1.2:51278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017965s
	[INFO] 10.244.1.2:37849 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301689s
	[INFO] 10.244.0.4:58808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118281s
	[INFO] 10.244.0.4:59347 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074943s
	[INFO] 10.244.0.4:44264 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127442s
	[INFO] 10.244.0.4:45870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001035173s
	[INFO] 10.244.0.4:45397 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126149s
	[INFO] 10.244.2.2:38985 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241724s
	[INFO] 10.244.1.2:41200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185837s
	[INFO] 10.244.0.4:53459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188027s
	[INFO] 10.244.0.4:43760 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146395s
	[INFO] 10.244.2.2:45375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112163s
	[INFO] 10.244.2.2:60638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000225418s
	[INFO] 10.244.1.2:33012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251463s
	[INFO] 10.244.0.4:48613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079688s
	[INFO] 10.244.0.4:54870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050324s
	[INFO] 10.244.0.4:36700 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167489s
	[INFO] 10.244.0.4:56859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077358s
	[INFO] 10.244.2.2:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122063s
	[INFO] 10.244.2.2:43717 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123902s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8] <==
	[INFO] plugin/kubernetes: Trace[1109309593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:28:44.662) (total time: 10532ms):
	Trace[1109309593]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45080->10.96.0.1:443: read: connection reset by peer 10530ms (21:28:55.192)
	Trace[1109309593]: [10.532390183s] [10.532390183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45080->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1560009575]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:28:44.458) (total time: 13499ms):
	Trace[1560009575]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45066->10.96.0.1:443: read: connection reset by peer 13499ms (21:28:57.958)
	Trace[1560009575]: [13.499965503s] [13.499965503s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:45066->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d] <==
	[INFO] 10.244.0.4:43928 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004146s
	[INFO] 10.244.2.2:44358 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001832218s
	[INFO] 10.244.2.2:34081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017944s
	[INFO] 10.244.2.2:36047 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087749s
	[INFO] 10.244.2.2:60557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001143135s
	[INFO] 10.244.2.2:60835 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073052s
	[INFO] 10.244.2.2:42876 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093376s
	[INFO] 10.244.2.2:33057 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070619s
	[INFO] 10.244.1.2:41910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009436s
	[INFO] 10.244.1.2:43839 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082555s
	[INFO] 10.244.1.2:39008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075851s
	[INFO] 10.244.0.4:47500 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110566s
	[INFO] 10.244.0.4:44728 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071752s
	[INFO] 10.244.2.2:38205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222144s
	[INFO] 10.244.2.2:46321 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164371s
	[INFO] 10.244.1.2:41080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205837s
	[INFO] 10.244.1.2:58822 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264144s
	[INFO] 10.244.1.2:55995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174393s
	[INFO] 10.244.2.2:46471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00069286s
	[INFO] 10.244.2.2:52414 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163744s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
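
The attempt-0 CoreDNS containers above end with SIGTERM followed by the 5s lameduck window, which is consistent with being stopped during the node restart rather than crashing. Their logs remain retrievable afterwards; a small sketch, again assuming the ha-322980 context and profile names:

    kubectl --context ha-322980 -n kube-system logs coredns-7db6d8ff4d-78zmw --previous   # previous (attempt 0) instance of this pod
    out/minikube-linux-amd64 -p ha-322980 ssh "sudo crictl logs e065fafa4b7aa"            # or straight from the exited container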
	
	
	==> describe nodes <==
	Name:               ha-322980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T21_16_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:29:18 +0000   Sun, 05 May 2024 21:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-322980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a019ec328ab467ca04365748baaa367
	  System UUID:                3a019ec3-28ab-467c-a043-65748baaa367
	  Boot ID:                    c9018f9a-79b9-43c5-a307-9ae120187dfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xt9l5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-78zmw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-fqt45             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-322980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lwtnx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-322980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-322980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8xdzd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-322980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-322980                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   Starting                 59s                  kube-proxy       
	  Normal   NodeHasSufficientPID     14m                  kubelet          Node ha-322980 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                  kubelet          Node ha-322980 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                  kubelet          Node ha-322980 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   NodeReady                13m                  kubelet          Node ha-322980 status is now: NodeReady
	  Normal   RegisteredNode           11m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Warning  ContainerGCFailed        2m1s (x2 over 3m1s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           50s                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           44s                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	
	
	Name:               ha-322980-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:30:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:30:00 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-322980-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5d1651406694de39b61eff245fccb61
	  System UUID:                c5d16514-0669-4de3-9b61-eff245fccb61
	  Boot ID:                    c80c8e5b-42c8-42c0-ad77-611ed5db7d30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tbmdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-322980-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-lmgkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-322980-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-322980-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wbf7q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-322980-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-322980-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  NodeNotReady             8m8s               node-controller  Node ha-322980-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           50s                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           44s                node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	
	
	Name:               ha-322980-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_20_29_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:20:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:24:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:21:06 +0000   Sun, 05 May 2024 21:30:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-322980-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c8db3356b24ba197e491501ddbfd49
	  System UUID:                a4c8db33-56b2-4ba1-97e4-91501ddbfd49
	  Boot ID:                    9ee2f344-9fdd-4182-a447-83dc5b12dc4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nnc4q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m47s
	  kube-system                 kube-proxy-68cxr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m47s (x3 over 9m48s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m47s (x3 over 9m48s)  kubelet          Node ha-322980-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m47s (x3 over 9m48s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m46s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           9m45s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           9m43s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  NodeReady                9m9s                   kubelet          Node ha-322980-m04 status is now: NodeReady
	  Normal  RegisteredNode           50s                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  RegisteredNode           44s                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal  NodeNotReady             10s                    node-controller  Node ha-322980-m04 status is now: NodeNotReady
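
The describe output above shows ha-322980-m04 carrying the node.kubernetes.io/unreachable:NoSchedule taint with all conditions Unknown after its kubelet stopped posting status. Below is a minimal client-go sketch of how that state could be read programmatically; the kubeconfig path and the output format are illustrative assumptions, not part of the test harness.

// nodecheck.go: sketch only; reads Ready conditions and the unreachable taint.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := "Unknown"
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = string(c.Status) // True, False, or Unknown as in the table above
			}
		}
		unreachable := false
		for _, t := range n.Spec.Taints {
			if t.Key == corev1.TaintNodeUnreachable {
				unreachable = true
			}
		}
		fmt.Printf("%s Ready=%s unreachable-taint=%v\n", n.Name, ready, unreachable)
	}
}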
	
	
	==> dmesg <==
	[  +8.501831] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.064246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066779] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.227983] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.115503] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.299594] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +5.048468] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.072016] kauditd_printk_skb: 130 callbacks suppressed
	[May 5 21:16] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.935027] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.150561] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.089537] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.653864] kauditd_printk_skb: 21 callbacks suppressed
	[May 5 21:18] kauditd_printk_skb: 74 callbacks suppressed
	[May 5 21:28] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.163899] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.174337] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.161075] systemd-fstab-generator[3841]: Ignoring "noauto" option for root device
	[  +0.301050] systemd-fstab-generator[3869]: Ignoring "noauto" option for root device
	[  +0.856611] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	[  +4.601668] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.029599] kauditd_printk_skb: 86 callbacks suppressed
	[ +11.080916] kauditd_printk_skb: 1 callbacks suppressed
	[May 5 21:29] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.083309] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a] <==
	{"level":"warn","ts":"2024-05-05T21:29:56.797621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.807832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.817434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.907884Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:56.915391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.00786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.107911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.208202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:57.307869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:29:58.402875Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:29:58.402883Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:00.329526Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:00.329598Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:03.403963Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:03.40398Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:04.331623Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:04.331672Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:08.334255Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:08.334417Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:08.404305Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:08.404371Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:12.337192Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:12.337371Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:13.405073Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:30:13.405224Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: connection refused"}
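
The prober warnings above all point at a single peer, 192.168.39.29:2380 (the former ha-322980-m03 member), refusing connections while the other members stay healthy. Below is a minimal sketch that queries each member's status with the etcd clientv3 API; the endpoint list is taken from the log and the omitted TLS configuration is an assumption for illustration, not what the test framework runs.

// etcdprobe.go: sketch only; reports per-endpoint status or the dial error.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Client endpoints for the three control-plane members; 192.168.39.29 is the
	// member the prober reports as unreachable.
	endpoints := []string{
		"https://192.168.39.178:2379",
		"https://192.168.39.228:2379",
		"https://192.168.39.29:2379",
	}
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
		// In practice the client certs under /var/lib/minikube/certs would be required here.
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		st, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			fmt.Printf("%s: unhealthy: %v\n", ep, err)
			continue
		}
		fmt.Printf("%s: member %x, leader %x, raft term %d\n", ep, st.Header.MemberId, st.Leader, st.RaftTerm)
	}
}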
	
	
	==> etcd [97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923] <==
	{"level":"info","ts":"2024-05-05T21:26:54.126418Z","caller":"traceutil/trace.go:171","msg":"trace[1422100536] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"439.354424ms","start":"2024-05-05T21:26:53.687056Z","end":"2024-05-05T21:26:54.126411Z","steps":["trace[1422100536] 'agreement among raft nodes before linearized reading'  (duration: 436.623479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:26:54.126525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:26:53.687052Z","time spent":"439.463134ms","remote":"127.0.0.1:43734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:26:54.176083Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:26:54.176142Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:26:54.176235Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"dced536bf07718ca","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-05T21:26:54.176415Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176459Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.17649Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176585Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176654Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176819Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.17686Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176869Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.176883Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.176901Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177003Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177072Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177115Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.180402Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:26:54.180578Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:26:54.180616Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-322980","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.178:2380"],"advertise-client-urls":["https://192.168.39.178:2379"]}
	
	
	==> kernel <==
	 21:30:15 up 14 min,  0 users,  load average: 0.94, 0.73, 0.41
	Linux ha-322980 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2e9a9bfcaf10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4] <==
	I0505 21:28:33.366089       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0505 21:28:33.366248       1 main.go:107] hostIP = 192.168.39.178
	podIP = 192.168.39.178
	I0505 21:28:33.366430       1 main.go:116] setting mtu 1500 for CNI 
	I0505 21:28:33.366469       1 main.go:146] kindnetd IP family: "ipv4"
	I0505 21:28:33.366564       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0505 21:28:36.455538       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0505 21:28:39.527033       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0505 21:28:50.536997       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0505 21:28:55.189449       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.78:51834->10.96.0.1:443: read: connection reset by peer
	I0505 21:28:58.192741       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
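
The kindnetd log above shows a bounded retry loop against the in-cluster apiserver address (10.96.0.1:443) that ends in a panic once the retries are exhausted. The sketch below reproduces that retry-then-fail shape with client-go; the retry count, backoff, and messages are illustrative assumptions, not kindnetd's actual implementation.

// retrynodes.go: sketch of a bounded retry around a node list call.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func listNodesWithRetry(cs *kubernetes.Clientset, maxRetries int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		_, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err == nil {
			return nil
		}
		lastErr = err
		fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
		time.Sleep(backoff)
	}
	return fmt.Errorf("reached maximum retries obtaining node list: %w", lastErr)
}

func main() {
	// Runs in-cluster, so the apiserver is reached via the service IP (10.96.0.1:443).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := listNodesWithRetry(cs, 5, 3*time.Second); err != nil {
		panic(err) // mirrors the fatal exit seen in the log once retries run out
	}
}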
	
	
	==> kindnet [d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e] <==
	I0505 21:29:44.465833       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:29:44.466009       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:29:44.466018       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:29:54.483489       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:29:54.483510       1 main.go:227] handling current node
	I0505 21:29:54.483519       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:29:54.483524       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:29:54.483635       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:29:54.483676       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:29:54.483744       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:29:54.483838       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:30:04.500852       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:30:04.500874       1 main.go:227] handling current node
	I0505 21:30:04.500883       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:30:04.500888       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:30:04.500984       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:30:04.500989       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:30:04.501029       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:30:04.501033       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:30:14.519327       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:30:14.519375       1 main.go:227] handling current node
	I0505 21:30:14.519393       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:30:14.519398       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:30:14.519561       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:30:14.519567       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d] <==
	I0505 21:29:14.675369       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:29:14.675740       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0505 21:29:14.747127       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:29:14.748867       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:29:14.749586       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:29:14.753732       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:29:14.753841       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:29:14.753861       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:29:14.753867       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:29:14.753872       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:29:14.756745       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0505 21:29:14.764451       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.228 192.168.39.29]
	I0505 21:29:14.776493       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:29:14.778408       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:29:14.778419       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:29:14.810622       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:29:14.811940       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:29:14.811990       1 policy_source.go:224] refreshing policies
	I0505 21:29:14.844983       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:29:14.866878       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:29:14.882093       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0505 21:29:14.889341       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0505 21:29:15.655887       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0505 21:29:16.108522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.228 192.168.39.29]
	W0505 21:29:26.114517       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.228]
	
	
	==> kube-apiserver [d1ff1f42ee456adbb1ab902c56155f156eb2f298a79dc46f7d316794adc69f37] <==
	I0505 21:28:33.077501       1 options.go:221] external host was not specified, using 192.168.39.178
	I0505 21:28:33.079214       1 server.go:148] Version: v1.30.0
	I0505 21:28:33.079278       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:28:34.149386       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0505 21:28:34.175562       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:28:34.176231       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0505 21:28:34.181000       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0505 21:28:34.181265       1 instance.go:299] Using reconciler: lease
	W0505 21:28:54.146228       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0505 21:28:54.146452       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0505 21:28:54.181961       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d] <==
	I0505 21:28:34.536857       1 serving.go:380] Generated self-signed cert in-memory
	I0505 21:28:34.963263       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 21:28:34.963366       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:28:34.965468       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:28:34.965623       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 21:28:34.966267       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:28:34.966207       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0505 21:28:55.190151       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.178:8443/healthz\": dial tcp 192.168.39.178:8443: connect: connection refused"
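
This controller-manager instance gave up because the apiserver health check at https://192.168.39.178:8443/healthz never succeeded before its deadline. Below is a standalone sketch of such a poll-until-healthy check; the endpoint, the timeout values, and the skipped certificate verification are assumptions for illustration only, not the controller-manager's own code.

// healthzwait.go: sketch only; polls /healthz until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.178:8443/healthz"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The serving cert is not verified here; a real check would trust
		// /var/lib/minikube/certs/ca.crt instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver is healthy")
			return
		}
		if err != nil {
			fmt.Printf("healthz not ready: %v\n", err)
		} else {
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}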
	
	
	==> kube-controller-manager [b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9] <==
	I0505 21:29:31.474194       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0505 21:29:31.491709       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0505 21:29:31.501315       1 shared_informer.go:320] Caches are synced for deployment
	I0505 21:29:31.504733       1 shared_informer.go:320] Caches are synced for crt configmap
	I0505 21:29:31.597296       1 shared_informer.go:320] Caches are synced for stateful set
	I0505 21:29:31.628093       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:29:31.628297       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:29:31.652424       1 shared_informer.go:320] Caches are synced for disruption
	I0505 21:29:32.045082       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:29:32.086936       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:29:32.087026       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0505 21:29:38.137225       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-zwjpl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-zwjpl\": the object has been modified; please apply your changes to the latest version and try again"
	I0505 21:29:38.138469       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bd0b72d6-9faf-4581-8043-a8dc8030d953", APIVersion:"v1", ResourceVersion:"239", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-zwjpl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-zwjpl": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:29:38.217559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="124.82676ms"
	I0505 21:29:38.217748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="123.485µs"
	I0505 21:29:38.240587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.093008ms"
	I0505 21:29:38.241056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.282µs"
	I0505 21:30:05.900371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.249589ms"
	I0505 21:30:05.900889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.064µs"
	I0505 21:30:09.312496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.604874ms"
	I0505 21:30:09.395444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.755993ms"
	I0505 21:30:09.418104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.069539ms"
	I0505 21:30:09.418323       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.111µs"
	E0505 21:30:12.614439       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-322980-m03", UID:"a59f7069-368c-4661-9d67-419065445657", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerW
ait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-322980-m03", UID:"e8fac6a5-bfb4-4079-a998-9ebdebf0cddb", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-322980-m03" not found
	E0505 21:30:12.625053       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-322980-m03", UID:"ed14c339-c112-4930-a406-25d37f2b1524", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_
:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-322980-m03", UID:"e8fac6a5-bfb4-4079-a998-9ebdebf0cddb", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-322980-m03" not found
	
	
	==> kube-proxy [4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c] <==
	E0505 21:25:37.638259       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:37.638443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:37.638505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:40.774308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:40.774433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:43.846169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:43.846281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:43.846378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:43.846415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:49.288357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:49.288477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:52.359464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:52.359604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:52.359850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:52.360135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:01.575624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:01.575831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:07.718256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:07.718308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:16.935169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:16.935384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:26.151522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:26.151611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:50.729138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:50.729278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b] <==
	E0505 21:28:56.678399       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0505 21:29:15.111476       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0505 21:29:15.111631       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0505 21:29:15.158669       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:29:15.159039       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:29:15.159098       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:29:15.162311       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:29:15.162696       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:29:15.162874       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:29:15.164823       1 config.go:192] "Starting service config controller"
	I0505 21:29:15.164887       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:29:15.164934       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:29:15.164953       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:29:15.165937       1 config.go:319] "Starting node config controller"
	I0505 21:29:15.165973       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0505 21:29:18.185437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.185597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:29:18.185705       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.185878       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:29:18.185973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.186035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:29:18.186283       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0505 21:29:19.368732       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:29:19.565743       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:29:19.566403       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b] <==
	W0505 21:26:48.759656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:26:48.759854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 21:26:49.007969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:49.008025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:49.192688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:26:49.192899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:26:49.499913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 21:26:49.500129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 21:26:49.690540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:26:49.690638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 21:26:49.803920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 21:26:49.804019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:26:49.837414       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:26:49.837447       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:26:50.166566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:50.166673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:50.189955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:26:50.190063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:26:50.388221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:50.388612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:51.048541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:26:51.048638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:26:51.142262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:51.142364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:54.094710       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f] <==
	W0505 21:29:04.997573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.178:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:04.997652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.178:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.128425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.128575       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.147199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.147341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.216455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.178:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.216557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.178:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:05.321585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:05.321664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:09.702065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.178:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:09.702200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.178:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:10.927665       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.178:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:10.927720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.178:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:12.398235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:29:12.398339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:29:14.689401       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:29:14.691913       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:29:14.723694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:29:14.723815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 21:29:14.723924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:29:14.723961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 21:29:14.727090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:29:14.727163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0505 21:29:39.409575       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 05 21:29:15 ha-322980 kubelet[1385]: E0505 21:29:15.110143    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:15 ha-322980 kubelet[1385]: I0505 21:29:15.110160    1385 status_manager.go:853] "Failed to get status for pod" podUID="578ccf60a9d00c195d5069c63fb0b319" pod="kube-system/kube-controller-manager-ha-322980" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:18 ha-322980 kubelet[1385]: E0505 21:29:18.182266    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:18 ha-322980 kubelet[1385]: E0505 21:29:18.183048    1385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	May 05 21:29:18 ha-322980 kubelet[1385]: I0505 21:29:18.183205    1385 status_manager.go:853] "Failed to get status for pod" podUID="d0b6492d-c0f5-45dd-8482-c447b81daa66" pod="kube-system/kube-proxy-8xdzd" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdzd\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:29:18 ha-322980 kubelet[1385]: I0505 21:29:18.378035    1385 scope.go:117] "RemoveContainer" containerID="06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d"
	May 05 21:29:27 ha-322980 kubelet[1385]: I0505 21:29:27.377745    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:29:27 ha-322980 kubelet[1385]: E0505 21:29:27.378418    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc212ac3-7499-4edc-b5a5-622b0bd4a891)\"" pod="kube-system/storage-provisioner" podUID="bc212ac3-7499-4edc-b5a5-622b0bd4a891"
	May 05 21:29:29 ha-322980 kubelet[1385]: I0505 21:29:29.377956    1385 scope.go:117] "RemoveContainer" containerID="2e9a9bfcaf10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4"
	May 05 21:29:29 ha-322980 kubelet[1385]: E0505 21:29:29.378401    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-lwtnx_kube-system(4033535e-69f1-426c-bb17-831fad6336d5)\"" pod="kube-system/kindnet-lwtnx" podUID="4033535e-69f1-426c-bb17-831fad6336d5"
	May 05 21:29:40 ha-322980 kubelet[1385]: I0505 21:29:40.378180    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:29:40 ha-322980 kubelet[1385]: E0505 21:29:40.380981    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc212ac3-7499-4edc-b5a5-622b0bd4a891)\"" pod="kube-system/storage-provisioner" podUID="bc212ac3-7499-4edc-b5a5-622b0bd4a891"
	May 05 21:29:43 ha-322980 kubelet[1385]: I0505 21:29:43.378120    1385 scope.go:117] "RemoveContainer" containerID="2e9a9bfcaf10e05ee630f2bf1bb282bdfa63fde7a77a450390e8b38f43e429d4"
	May 05 21:29:51 ha-322980 kubelet[1385]: I0505 21:29:51.378189    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:29:51 ha-322980 kubelet[1385]: E0505 21:29:51.378643    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bc212ac3-7499-4edc-b5a5-622b0bd4a891)\"" pod="kube-system/storage-provisioner" podUID="bc212ac3-7499-4edc-b5a5-622b0bd4a891"
	May 05 21:29:54 ha-322980 kubelet[1385]: I0505 21:29:54.378500    1385 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-322980" podUID="8743dbcc-49f9-46e8-8088-cd5020429c08"
	May 05 21:29:54 ha-322980 kubelet[1385]: I0505 21:29:54.400896    1385 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-322980"
	May 05 21:29:55 ha-322980 kubelet[1385]: I0505 21:29:55.028652    1385 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-322980" podUID="8743dbcc-49f9-46e8-8088-cd5020429c08"
	May 05 21:30:04 ha-322980 kubelet[1385]: I0505 21:30:04.378605    1385 scope.go:117] "RemoveContainer" containerID="0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	May 05 21:30:05 ha-322980 kubelet[1385]: I0505 21:30:05.101417    1385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-322980" podStartSLOduration=11.101388283 podStartE2EDuration="11.101388283s" podCreationTimestamp="2024-05-05 21:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-05 21:30:04.427350229 +0000 UTC m=+830.199064321" watchObservedRunningTime="2024-05-05 21:30:05.101388283 +0000 UTC m=+830.873102377"
	May 05 21:30:14 ha-322980 kubelet[1385]: E0505 21:30:14.407283    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:30:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:30:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:30:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:30:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 21:30:14.096309   37705 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18602-11466/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-322980 -n ha-322980
helpers_test.go:261: (dbg) Run:  kubectl --context ha-322980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-2klvr
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-322980 describe pod busybox-fc5497c4f-2klvr
helpers_test.go:282: (dbg) kubectl --context ha-322980 describe pod busybox-fc5497c4f-2klvr:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-2klvr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2n8qw (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-2n8qw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  7s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (7.86s)
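
For local triage of the scheduling failure shown in the post-mortem above, the following is a minimal sketch of checks an engineer might run against the same profile; it assumes the ha-322980 context, the busybox-fc5497c4f-2klvr pod, and the ha-322980-m02 node from this run still exist, and it uses only standard kubectl flags:

	# Confirm the node conditions/taints referenced by the FailedScheduling events
	kubectl --context ha-322980 get nodes -o wide
	kubectl --context ha-322980 describe node ha-322980-m02 | grep -A2 Taints
	# Show the anti-affinity constraint the scheduler reports the pod cannot satisfy
	kubectl --context ha-322980 get pod busybox-fc5497c4f-2klvr \
	  -o jsonpath='{.spec.affinity.podAntiAffinity}{"\n"}'
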

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (173.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 stop -v=7 --alsologtostderr
E0505 21:31:51.948013   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 stop -v=7 --alsologtostderr: exit status 82 (2m2.324993178s)

                                                
                                                
-- stdout --
	* Stopping node "ha-322980-m04"  ...
	* Stopping node "ha-322980-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:30:16.900151   37842 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:30:16.900274   37842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:30:16.900289   37842 out.go:304] Setting ErrFile to fd 2...
	I0505 21:30:16.900294   37842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:30:16.900498   37842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:30:16.900771   37842 out.go:298] Setting JSON to false
	I0505 21:30:16.900870   37842 mustload.go:65] Loading cluster: ha-322980
	I0505 21:30:16.901312   37842 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:30:16.901433   37842 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:30:16.901678   37842 mustload.go:65] Loading cluster: ha-322980
	I0505 21:30:16.901833   37842 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:30:16.901871   37842 stop.go:39] StopHost: ha-322980-m04
	I0505 21:30:16.902348   37842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:16.902417   37842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:16.918754   37842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0505 21:30:16.919274   37842 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:16.919874   37842 main.go:141] libmachine: Using API Version  1
	I0505 21:30:16.919902   37842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:16.920320   37842 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:16.922759   37842 out.go:177] * Stopping node "ha-322980-m04"  ...
	I0505 21:30:16.924326   37842 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0505 21:30:16.924362   37842 main.go:141] libmachine: (ha-322980-m04) Calling .DriverName
	I0505 21:30:16.924632   37842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0505 21:30:16.924663   37842 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:30:16.926161   37842 retry.go:31] will retry after 354.396073ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0505 21:30:17.280655   37842 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:30:17.282279   37842 retry.go:31] will retry after 193.008599ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0505 21:30:17.475699   37842 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:30:17.477553   37842 retry.go:31] will retry after 628.446969ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0505 21:30:18.106394   37842 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	I0505 21:30:18.108087   37842 retry.go:31] will retry after 620.505431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0505 21:30:18.729315   37842 main.go:141] libmachine: (ha-322980-m04) Calling .GetSSHHostname
	W0505 21:30:18.731051   37842 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0505 21:30:18.731092   37842 main.go:141] libmachine: Stopping "ha-322980-m04"...
	I0505 21:30:18.731100   37842 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:30:18.732403   37842 stop.go:66] stop err: Machine "ha-322980-m04" is already stopped.
	I0505 21:30:18.732429   37842 stop.go:69] host is already stopped
	I0505 21:30:18.732445   37842 stop.go:39] StopHost: ha-322980-m02
	I0505 21:30:18.732850   37842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:30:18.732924   37842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:30:18.747651   37842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I0505 21:30:18.748114   37842 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:30:18.748617   37842 main.go:141] libmachine: Using API Version  1
	I0505 21:30:18.748642   37842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:30:18.748936   37842 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:30:18.751111   37842 out.go:177] * Stopping node "ha-322980-m02"  ...
	I0505 21:30:18.752451   37842 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0505 21:30:18.752483   37842 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:30:18.752738   37842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0505 21:30:18.752760   37842 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:30:18.755350   37842 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:30:18.755758   37842 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:28:40 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:30:18.755788   37842 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:30:18.755910   37842 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:30:18.756092   37842 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:30:18.756256   37842 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:30:18.756396   37842 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	I0505 21:30:18.840556   37842 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0505 21:30:18.900005   37842 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0505 21:30:18.959920   37842 main.go:141] libmachine: Stopping "ha-322980-m02"...
	I0505 21:30:18.959949   37842 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:30:18.961722   37842 main.go:141] libmachine: (ha-322980-m02) Calling .Stop
	I0505 21:30:18.965459   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 0/120
	I0505 21:30:19.966923   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 1/120
	I0505 21:30:20.968335   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 2/120
	I0505 21:30:21.969741   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 3/120
	I0505 21:30:22.971319   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 4/120
	I0505 21:30:23.973012   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 5/120
	I0505 21:30:24.975000   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 6/120
	I0505 21:30:25.976874   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 7/120
	I0505 21:30:26.978918   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 8/120
	I0505 21:30:27.980365   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 9/120
	I0505 21:30:28.982151   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 10/120
	I0505 21:30:29.983453   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 11/120
	I0505 21:30:30.984923   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 12/120
	I0505 21:30:31.986427   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 13/120
	I0505 21:30:32.987768   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 14/120
	I0505 21:30:33.989640   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 15/120
	I0505 21:30:34.991084   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 16/120
	I0505 21:30:35.992783   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 17/120
	I0505 21:30:36.994239   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 18/120
	I0505 21:30:37.995803   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 19/120
	I0505 21:30:38.997983   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 20/120
	I0505 21:30:39.999397   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 21/120
	I0505 21:30:41.000808   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 22/120
	I0505 21:30:42.002217   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 23/120
	I0505 21:30:43.003668   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 24/120
	I0505 21:30:44.005611   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 25/120
	I0505 21:30:45.007180   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 26/120
	I0505 21:30:46.008758   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 27/120
	I0505 21:30:47.010221   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 28/120
	I0505 21:30:48.011523   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 29/120
	I0505 21:30:49.013122   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 30/120
	I0505 21:30:50.014678   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 31/120
	I0505 21:30:51.016081   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 32/120
	I0505 21:30:52.017327   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 33/120
	I0505 21:30:53.018598   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 34/120
	I0505 21:30:54.020545   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 35/120
	I0505 21:30:55.022163   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 36/120
	I0505 21:30:56.023518   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 37/120
	I0505 21:30:57.025189   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 38/120
	I0505 21:30:58.026630   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 39/120
	I0505 21:30:59.028773   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 40/120
	I0505 21:31:00.030604   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 41/120
	I0505 21:31:01.032411   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 42/120
	I0505 21:31:02.034181   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 43/120
	I0505 21:31:03.036433   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 44/120
	I0505 21:31:04.038216   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 45/120
	I0505 21:31:05.039834   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 46/120
	I0505 21:31:06.041503   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 47/120
	I0505 21:31:07.043235   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 48/120
	I0505 21:31:08.044658   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 49/120
	I0505 21:31:09.046787   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 50/120
	I0505 21:31:10.048321   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 51/120
	I0505 21:31:11.049839   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 52/120
	I0505 21:31:12.051185   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 53/120
	I0505 21:31:13.052790   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 54/120
	I0505 21:31:14.054920   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 55/120
	I0505 21:31:15.056503   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 56/120
	I0505 21:31:16.057852   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 57/120
	I0505 21:31:17.059184   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 58/120
	I0505 21:31:18.060534   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 59/120
	I0505 21:31:19.062606   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 60/120
	I0505 21:31:20.063866   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 61/120
	I0505 21:31:21.065532   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 62/120
	I0505 21:31:22.067143   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 63/120
	I0505 21:31:23.068691   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 64/120
	I0505 21:31:24.070440   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 65/120
	I0505 21:31:25.072048   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 66/120
	I0505 21:31:26.074018   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 67/120
	I0505 21:31:27.075468   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 68/120
	I0505 21:31:28.077244   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 69/120
	I0505 21:31:29.079205   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 70/120
	I0505 21:31:30.080543   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 71/120
	I0505 21:31:31.082701   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 72/120
	I0505 21:31:32.084053   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 73/120
	I0505 21:31:33.085534   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 74/120
	I0505 21:31:34.087543   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 75/120
	I0505 21:31:35.088951   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 76/120
	I0505 21:31:36.090490   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 77/120
	I0505 21:31:37.092105   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 78/120
	I0505 21:31:38.093470   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 79/120
	I0505 21:31:39.095832   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 80/120
	I0505 21:31:40.097397   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 81/120
	I0505 21:31:41.098759   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 82/120
	I0505 21:31:42.100178   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 83/120
	I0505 21:31:43.101604   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 84/120
	I0505 21:31:44.103291   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 85/120
	I0505 21:31:45.104969   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 86/120
	I0505 21:31:46.106451   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 87/120
	I0505 21:31:47.107849   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 88/120
	I0505 21:31:48.109127   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 89/120
	I0505 21:31:49.110637   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 90/120
	I0505 21:31:50.112389   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 91/120
	I0505 21:31:51.114568   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 92/120
	I0505 21:31:52.115882   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 93/120
	I0505 21:31:53.117980   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 94/120
	I0505 21:31:54.119935   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 95/120
	I0505 21:31:55.122267   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 96/120
	I0505 21:31:56.123578   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 97/120
	I0505 21:31:57.125158   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 98/120
	I0505 21:31:58.126596   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 99/120
	I0505 21:31:59.128139   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 100/120
	I0505 21:32:00.129419   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 101/120
	I0505 21:32:01.130990   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 102/120
	I0505 21:32:02.132404   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 103/120
	I0505 21:32:03.133715   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 104/120
	I0505 21:32:04.135506   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 105/120
	I0505 21:32:05.137047   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 106/120
	I0505 21:32:06.138623   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 107/120
	I0505 21:32:07.140026   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 108/120
	I0505 21:32:08.141432   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 109/120
	I0505 21:32:09.143541   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 110/120
	I0505 21:32:10.144889   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 111/120
	I0505 21:32:11.146429   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 112/120
	I0505 21:32:12.147697   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 113/120
	I0505 21:32:13.150133   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 114/120
	I0505 21:32:14.152065   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 115/120
	I0505 21:32:15.153607   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 116/120
	I0505 21:32:16.155178   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 117/120
	I0505 21:32:17.157462   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 118/120
	I0505 21:32:18.158845   37842 main.go:141] libmachine: (ha-322980-m02) Waiting for machine to stop 119/120
	I0505 21:32:19.159586   37842 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0505 21:32:19.159642   37842 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0505 21:32:19.162264   37842 out.go:177] 
	W0505 21:32:19.163848   37842 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0505 21:32:19.163872   37842 out.go:239] * 
	* 
	W0505 21:32:19.166030   37842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 21:32:19.168020   37842 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-322980 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr: exit status 7 (34.060613421s)

                                                
                                                
-- stdout --
	ha-322980
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-322980-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-322980-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:32:19.226386   38267 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:32:19.226525   38267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:32:19.226537   38267 out.go:304] Setting ErrFile to fd 2...
	I0505 21:32:19.226544   38267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:32:19.226786   38267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:32:19.226958   38267 out.go:298] Setting JSON to false
	I0505 21:32:19.226982   38267 mustload.go:65] Loading cluster: ha-322980
	I0505 21:32:19.227105   38267 notify.go:220] Checking for updates...
	I0505 21:32:19.227370   38267 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:32:19.227382   38267 status.go:255] checking status of ha-322980 ...
	I0505 21:32:19.227829   38267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:32:19.227895   38267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:32:19.244713   38267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0505 21:32:19.245209   38267 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:32:19.245814   38267 main.go:141] libmachine: Using API Version  1
	I0505 21:32:19.245833   38267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:32:19.246209   38267 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:32:19.246448   38267 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:32:19.248287   38267 status.go:330] ha-322980 host status = "Running" (err=<nil>)
	I0505 21:32:19.248308   38267 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:32:19.248653   38267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:32:19.248695   38267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:32:19.264080   38267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0505 21:32:19.264490   38267 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:32:19.264963   38267 main.go:141] libmachine: Using API Version  1
	I0505 21:32:19.264981   38267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:32:19.265261   38267 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:32:19.265518   38267 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:32:19.268507   38267 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:32:19.268918   38267 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:32:19.268954   38267 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:32:19.269087   38267 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:32:19.269421   38267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:32:19.269466   38267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:32:19.284431   38267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I0505 21:32:19.284789   38267 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:32:19.285226   38267 main.go:141] libmachine: Using API Version  1
	I0505 21:32:19.285254   38267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:32:19.285582   38267 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:32:19.285741   38267 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:32:19.285903   38267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:32:19.285932   38267 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:32:19.288433   38267 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:32:19.288790   38267 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:32:19.288812   38267 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:32:19.288978   38267 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:32:19.289176   38267 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:32:19.289317   38267 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:32:19.289448   38267 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:32:19.378043   38267 ssh_runner.go:195] Run: systemctl --version
	I0505 21:32:19.387587   38267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:32:19.405589   38267 kubeconfig.go:125] found "ha-322980" server: "https://192.168.39.254:8443"
	I0505 21:32:19.405624   38267 api_server.go:166] Checking apiserver status ...
	I0505 21:32:19.405655   38267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:32:19.422970   38267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6072/cgroup
	W0505 21:32:19.433585   38267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6072/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:32:19.433628   38267 ssh_runner.go:195] Run: ls
	I0505 21:32:19.440570   38267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:32:22.491823   38267 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0505 21:32:22.491884   38267 retry.go:31] will retry after 235.391787ms: state is "Stopped"
	I0505 21:32:22.728376   38267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:32:25.563810   38267 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0505 21:32:25.563856   38267 retry.go:31] will retry after 382.81404ms: state is "Stopped"
	I0505 21:32:25.947463   38267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:32:28.635805   38267 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0505 21:32:28.635858   38267 retry.go:31] will retry after 403.833629ms: state is "Stopped"
	I0505 21:32:29.040505   38267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:32:31.707818   38267 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0505 21:32:31.707870   38267 retry.go:31] will retry after 523.98103ms: state is "Stopped"
	I0505 21:32:32.232647   38267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0505 21:32:34.779872   38267 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0505 21:32:34.779922   38267 status.go:422] ha-322980 apiserver status = Running (err=<nil>)
	I0505 21:32:34.779930   38267 status.go:257] ha-322980 status: &{Name:ha-322980 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:32:34.779960   38267 status.go:255] checking status of ha-322980-m02 ...
	I0505 21:32:34.780371   38267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:32:34.780487   38267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:32:34.794759   38267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0505 21:32:34.795250   38267 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:32:34.795801   38267 main.go:141] libmachine: Using API Version  1
	I0505 21:32:34.795824   38267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:32:34.796132   38267 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:32:34.796330   38267 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:32:34.798076   38267 status.go:330] ha-322980-m02 host status = "Running" (err=<nil>)
	I0505 21:32:34.798092   38267 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:32:34.798507   38267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:32:34.798554   38267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:32:34.813013   38267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0505 21:32:34.813477   38267 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:32:34.813913   38267 main.go:141] libmachine: Using API Version  1
	I0505 21:32:34.813942   38267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:32:34.814301   38267 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:32:34.814472   38267 main.go:141] libmachine: (ha-322980-m02) Calling .GetIP
	I0505 21:32:34.817382   38267 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:32:34.817785   38267 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:28:40 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:32:34.817807   38267 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:32:34.817936   38267 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:32:34.818329   38267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:32:34.818445   38267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:32:34.833464   38267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0505 21:32:34.833886   38267 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:32:34.834326   38267 main.go:141] libmachine: Using API Version  1
	I0505 21:32:34.834345   38267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:32:34.834663   38267 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:32:34.834866   38267 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:32:34.835026   38267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:32:34.835047   38267 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHHostname
	I0505 21:32:34.838002   38267 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:32:34.838464   38267 main.go:141] libmachine: (ha-322980-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:59:b4", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:28:40 +0000 UTC Type:0 Mac:52:54:00:91:59:b4 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-322980-m02 Clientid:01:52:54:00:91:59:b4}
	I0505 21:32:34.838504   38267 main.go:141] libmachine: (ha-322980-m02) DBG | domain ha-322980-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:91:59:b4 in network mk-ha-322980
	I0505 21:32:34.838632   38267 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHPort
	I0505 21:32:34.838805   38267 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHKeyPath
	I0505 21:32:34.838942   38267 main.go:141] libmachine: (ha-322980-m02) Calling .GetSSHUsername
	I0505 21:32:34.839080   38267 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m02/id_rsa Username:docker}
	W0505 21:32:53.211788   38267 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0505 21:32:53.211896   38267 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0505 21:32:53.211919   38267 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:32:53.211927   38267 status.go:257] ha-322980-m02 status: &{Name:ha-322980-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0505 21:32:53.211954   38267 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0505 21:32:53.211962   38267 status.go:255] checking status of ha-322980-m04 ...
	I0505 21:32:53.212384   38267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:32:53.212459   38267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:32:53.227369   38267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0505 21:32:53.227804   38267 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:32:53.228265   38267 main.go:141] libmachine: Using API Version  1
	I0505 21:32:53.228284   38267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:32:53.228627   38267 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:32:53.228837   38267 main.go:141] libmachine: (ha-322980-m04) Calling .GetState
	I0505 21:32:53.230255   38267 status.go:330] ha-322980-m04 host status = "Stopped" (err=<nil>)
	I0505 21:32:53.230269   38267 status.go:343] host is not running, skipping remaining checks
	I0505 21:32:53.230277   38267 status.go:257] ha-322980-m04 status: &{Name:ha-322980-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr": ha-322980
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-322980-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-322980-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr": ha-322980
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-322980-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-322980-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr": ha-322980
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-322980-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-322980-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-322980 -n ha-322980
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-322980 -n ha-322980: exit status 2 (15.629120607s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-322980 logs -n 25: (1.575670464s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m04 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp testdata/cp-test.txt                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m04_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03:/home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m03 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-322980 node stop m02 -v=7                                                     | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-322980 node start m02 -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980 -v=7                                                           | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-322980 -v=7                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-322980 --wait=true -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC |                     |
	| node    | ha-322980 node delete m03 -v=7                                                   | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC | 05 May 24 21:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-322980 stop -v=7                                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:26:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:26:53.140232   36399 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:26:53.140470   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140481   36399 out.go:304] Setting ErrFile to fd 2...
	I0505 21:26:53.140485   36399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:26:53.140670   36399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:26:53.141198   36399 out.go:298] Setting JSON to false
	I0505 21:26:53.142084   36399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4160,"bootTime":1714940253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:26:53.142153   36399 start.go:139] virtualization: kvm guest
	I0505 21:26:53.144497   36399 out.go:177] * [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:26:53.146260   36399 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:26:53.146193   36399 notify.go:220] Checking for updates...
	I0505 21:26:53.148784   36399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:26:53.150106   36399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:26:53.151383   36399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:26:53.152533   36399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:26:53.153673   36399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:26:53.155327   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.155445   36399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:26:53.155966   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.156031   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.171200   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0505 21:26:53.171619   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.172129   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.172150   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.172473   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.172681   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.208543   36399 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:26:53.209967   36399 start.go:297] selected driver: kvm2
	I0505 21:26:53.209989   36399 start.go:901] validating driver "kvm2" against &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.210123   36399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:26:53.210493   36399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.210573   36399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:26:53.224851   36399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:26:53.225522   36399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:26:53.225581   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:26:53.225592   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:26:53.225643   36399 start.go:340] cluster config:
	{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:26:53.225764   36399 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:26:53.228370   36399 out.go:177] * Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	I0505 21:26:53.230047   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:26:53.230086   36399 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:26:53.230093   36399 cache.go:56] Caching tarball of preloaded images
	I0505 21:26:53.230188   36399 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:26:53.230200   36399 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:26:53.230314   36399 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:26:53.230520   36399 start.go:360] acquireMachinesLock for ha-322980: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:26:53.230568   36399 start.go:364] duration metric: took 30.264µs to acquireMachinesLock for "ha-322980"
	I0505 21:26:53.230584   36399 start.go:96] Skipping create...Using existing machine configuration
	I0505 21:26:53.230594   36399 fix.go:54] fixHost starting: 
	I0505 21:26:53.230851   36399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:26:53.230880   36399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:26:53.244841   36399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
	I0505 21:26:53.245311   36399 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:26:53.245787   36399 main.go:141] libmachine: Using API Version  1
	I0505 21:26:53.245816   36399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:26:53.246134   36399 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:26:53.246309   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.246459   36399 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:26:53.248132   36399 fix.go:112] recreateIfNeeded on ha-322980: state=Running err=<nil>
	W0505 21:26:53.248160   36399 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 21:26:53.251264   36399 out.go:177] * Updating the running kvm2 "ha-322980" VM ...
	I0505 21:26:53.252511   36399 machine.go:94] provisionDockerMachine start ...
	I0505 21:26:53.252536   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:26:53.252737   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.255085   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255500   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.255526   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.255681   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.255852   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256000   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.256133   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.256288   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.256537   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.256551   36399 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 21:26:53.369308   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.369346   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369606   36399 buildroot.go:166] provisioning hostname "ha-322980"
	I0505 21:26:53.369639   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.369820   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.372637   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373124   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.373151   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.373370   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.373567   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373735   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.373877   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.374056   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.374277   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.374294   36399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980 && echo "ha-322980" | sudo tee /etc/hostname
	I0505 21:26:53.506808   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:26:53.506842   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.509223   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509600   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.509626   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.509814   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.509985   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510157   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.510289   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.510416   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.510579   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.510595   36399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:26:53.629485   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:26:53.629511   36399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:26:53.629528   36399 buildroot.go:174] setting up certificates
	I0505 21:26:53.629535   36399 provision.go:84] configureAuth start
	I0505 21:26:53.629551   36399 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:26:53.629801   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:26:53.632716   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633088   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.633131   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.633288   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.635715   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636140   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.636167   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.636330   36399 provision.go:143] copyHostCerts
	I0505 21:26:53.636361   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636406   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:26:53.636418   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:26:53.636502   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:26:53.636618   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636644   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:26:53.636654   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:26:53.636691   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:26:53.636765   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636795   36399 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:26:53.636805   36399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:26:53.636837   36399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:26:53.636954   36399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980 san=[127.0.0.1 192.168.39.178 ha-322980 localhost minikube]
	I0505 21:26:53.769238   36399 provision.go:177] copyRemoteCerts
	I0505 21:26:53.769301   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:26:53.769337   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.772321   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772662   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.772698   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.772861   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.773067   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.773321   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.773466   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:26:53.859548   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:26:53.859622   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:26:53.890248   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:26:53.890322   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:26:53.919935   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:26:53.919995   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0505 21:26:53.952579   36399 provision.go:87] duration metric: took 323.032938ms to configureAuth
	I0505 21:26:53.952610   36399 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:26:53.952915   36399 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:26:53.952991   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:26:53.955785   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956181   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:26:53.956212   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:26:53.956489   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:26:53.956663   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.956856   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:26:53.957020   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:26:53.957195   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:26:53.957360   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:26:53.957381   36399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:28:24.802156   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:28:24.802179   36399 machine.go:97] duration metric: took 1m31.549649754s to provisionDockerMachine
	I0505 21:28:24.802191   36399 start.go:293] postStartSetup for "ha-322980" (driver="kvm2")
	I0505 21:28:24.802201   36399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:28:24.802219   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.802523   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:28:24.802541   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.805857   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806374   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.806400   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.806574   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.806774   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.806947   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.807068   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:24.897937   36399 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:28:24.902998   36399 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:28:24.903020   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:28:24.903069   36399 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:28:24.903140   36399 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:28:24.903156   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:28:24.903230   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:28:24.914976   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:24.942422   36399 start.go:296] duration metric: took 140.219842ms for postStartSetup
	I0505 21:28:24.942466   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:24.942795   36399 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 21:28:24.942828   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:24.945241   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945698   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:24.945723   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:24.945879   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:24.946049   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:24.946187   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:24.946343   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	W0505 21:28:25.031258   36399 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0505 21:28:25.031281   36399 fix.go:56] duration metric: took 1m31.80069046s for fixHost
	I0505 21:28:25.031302   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.033882   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034222   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.034253   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.034384   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.034608   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034808   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.034979   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.035177   36399 main.go:141] libmachine: Using SSH client type: native
	I0505 21:28:25.035393   36399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:28:25.035405   36399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:28:25.145055   36399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944505.115925429
	
	I0505 21:28:25.145080   36399 fix.go:216] guest clock: 1714944505.115925429
	I0505 21:28:25.145089   36399 fix.go:229] Guest: 2024-05-05 21:28:25.115925429 +0000 UTC Remote: 2024-05-05 21:28:25.031289392 +0000 UTC m=+91.939181071 (delta=84.636037ms)
	I0505 21:28:25.145109   36399 fix.go:200] guest clock delta is within tolerance: 84.636037ms
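
	[editor's note] The mangled command `date +%!s(MISSING).%!N(MISSING)` above is presumably `date +%s.%N` (epoch seconds and nanoseconds, matching the 1714944505.115925429 output), and the delta check simply compares guest time against host time. A hedged sketch of the same comparison done by hand; the plain ssh invocation and the docker user/key path are assumptions based on the sshutil lines in this log:

	    # Compare guest and host clocks the way the log does (epoch.nanoseconds).
	    GUEST_IP=192.168.39.178
	    KEY=/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa
	    guest=$(ssh -i "$KEY" docker@"$GUEST_IP" 'date +%s.%N')
	    host=$(date +%s.%N)
	    # Print the absolute skew in seconds; small deltas are within tolerance.
	    awk -v h="$host" -v g="$guest" 'BEGIN { d = h - g; if (d < 0) d = -d; printf "skew: %.6fs\n", d }'
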
	I0505 21:28:25.145114   36399 start.go:83] releasing machines lock for "ha-322980", held for 1m31.914536671s
	I0505 21:28:25.145132   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.145355   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:25.147953   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148359   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.148378   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.148549   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149031   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149206   36399 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:28:25.149302   36399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:28:25.149351   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.149450   36399 ssh_runner.go:195] Run: cat /version.json
	I0505 21:28:25.149476   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:28:25.152099   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152175   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152532   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152556   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152579   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:25.152591   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:25.152718   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152853   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:28:25.152916   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.152986   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:28:25.153044   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153100   36399 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:28:25.153155   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.153222   36399 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:28:25.262146   36399 ssh_runner.go:195] Run: systemctl --version
	I0505 21:28:25.269585   36399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:28:25.445107   36399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:28:25.452093   36399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:28:25.452159   36399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:28:25.462054   36399 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 21:28:25.462081   36399 start.go:494] detecting cgroup driver to use...
	I0505 21:28:25.462145   36399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:28:25.479385   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:28:25.493826   36399 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:28:25.493881   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:28:25.508310   36399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:28:25.522866   36399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:28:25.681241   36399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:28:25.837193   36399 docker.go:233] disabling docker service ...
	I0505 21:28:25.837273   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:28:25.854654   36399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:28:25.869168   36399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:28:26.021077   36399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:28:26.172560   36399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:28:26.187950   36399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:28:26.209945   36399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:28:26.210011   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.221767   36399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:28:26.221821   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.233242   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.244526   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.255938   36399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:28:26.269084   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.280325   36399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.293020   36399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:28:26.303829   36399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:28:26.314019   36399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:28:26.324025   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:26.475013   36399 ssh_runner.go:195] Run: sudo systemctl restart crio
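
	[editor's note] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup) and then restarts CRI-O. A condensed sketch of the core edits, using only commands already visible in the log; run on the guest, not the Jenkins host:

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # Point crictl at the CRI-O socket
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # Pause image and cgroup driver, exactly as in the log above
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    # Apply and restart the runtime
	    sudo systemctl daemon-reload && sudo systemctl restart crio
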
	I0505 21:28:26.786010   36399 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:28:26.786082   36399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:28:26.791904   36399 start.go:562] Will wait 60s for crictl version
	I0505 21:28:26.791958   36399 ssh_runner.go:195] Run: which crictl
	I0505 21:28:26.796301   36399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:28:26.839834   36399 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:28:26.839910   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.872417   36399 ssh_runner.go:195] Run: crio --version
	I0505 21:28:26.905097   36399 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:28:26.906534   36399 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:28:26.909264   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909627   36399 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:28:26.909642   36399 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:28:26.909860   36399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:28:26.915241   36399 kubeadm.go:877] updating cluster {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:28:26.915374   36399 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:28:26.915433   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:26.965243   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:26.965271   36399 crio.go:433] Images already preloaded, skipping extraction
	I0505 21:28:26.965342   36399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:28:27.008398   36399 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:28:27.008421   36399 cache_images.go:84] Images are preloaded, skipping loading
	I0505 21:28:27.008433   36399 kubeadm.go:928] updating node { 192.168.39.178 8443 v1.30.0 crio true true} ...
	I0505 21:28:27.008545   36399 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:28:27.008627   36399 ssh_runner.go:195] Run: crio config
	I0505 21:28:27.062535   36399 cni.go:84] Creating CNI manager for ""
	I0505 21:28:27.062560   36399 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 21:28:27.062572   36399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:28:27.062601   36399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-322980 NodeName:ha-322980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:28:27.062742   36399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-322980"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
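	[editor's note] The rendered kubeadm config above is copied later in this log to /var/tmp/minikube/kubeadm.yaml.new on the node. A quick consistency check one could run there (this check is a suggestion, not something the test performs) is to confirm the KubeletConfiguration agrees with the CRI-O drop-in edited earlier:

	    # Cross-check the kubelet settings in the rendered kubeadm config
	    # against the CRI-O configuration applied earlier in this log.
	    CFG=/var/tmp/minikube/kubeadm.yaml.new
	    grep -E 'cgroupDriver|containerRuntimeEndpoint' "$CFG"
	    sudo grep -E 'cgroup_manager|pause_image' /etc/crio/crio.conf.d/02-crio.conf
	    # Expect cgroupfs in both, and unix:///var/run/crio/crio.sock in the kubelet config.
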
	I0505 21:28:27.062764   36399 kube-vip.go:111] generating kube-vip config ...
	I0505 21:28:27.062801   36399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:28:27.076515   36399 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:28:27.076654   36399 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
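
	[editor's note] The static pod above binds the HA virtual IP 192.168.39.254 on eth0 and enables control-plane load-balancing on port 8443, with leader election through the plndr-cp-lock lease. A few hedged checks (suggestions, not part of the run) for whether the VIP is actually being served, assuming kubectl access and a shell on a control-plane node:

	    # Is the VIP currently assigned to this node's interface?
	    ip addr show eth0 | grep 192.168.39.254
	    # Which node holds the kube-vip leader lease? (lease name from the config above)
	    kubectl -n kube-system get lease plndr-cp-lock
	    # Is the API server reachable through the VIP? /healthz is normally served to anonymous clients.
	    curl -sk https://192.168.39.254:8443/healthz; echo
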
	I0505 21:28:27.076721   36399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:28:27.087275   36399 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:28:27.087332   36399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 21:28:27.097140   36399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0505 21:28:27.115596   36399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:28:27.133989   36399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0505 21:28:27.152325   36399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:28:27.171626   36399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:28:27.176255   36399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:28:27.333712   36399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:28:27.351006   36399 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.178
	I0505 21:28:27.351031   36399 certs.go:194] generating shared ca certs ...
	I0505 21:28:27.351047   36399 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.351203   36399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:28:27.351247   36399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:28:27.351256   36399 certs.go:256] generating profile certs ...
	I0505 21:28:27.351322   36399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:28:27.351349   36399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019
	I0505 21:28:27.351360   36399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.29 192.168.39.254]
	I0505 21:28:27.773033   36399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 ...
	I0505 21:28:27.773068   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019: {Name:mk074feb2c078ad2537bc4b0f4572ad95bc07b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773263   36399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 ...
	I0505 21:28:27.773277   36399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019: {Name:mk2665c22bdd3135504eab2bc878577f3cbff151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:28:27.773371   36399 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:28:27.773505   36399 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.418ec019 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:28:27.773631   36399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:28:27.773646   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:28:27.773658   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:28:27.773671   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:28:27.773683   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:28:27.773695   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:28:27.773707   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:28:27.773719   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:28:27.773731   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:28:27.773773   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:28:27.773800   36399 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:28:27.773809   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:28:27.773829   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:28:27.773850   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:28:27.773870   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:28:27.773905   36399 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:28:27.773929   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:27.773943   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:28:27.773955   36399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:28:27.774493   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:28:27.804503   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:28:27.830821   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:28:27.858720   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:28:27.886328   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 21:28:27.912918   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:28:27.940090   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:28:27.967530   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:28:27.994650   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:28:28.022349   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:28:28.049290   36399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:28:28.075642   36399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:28:28.094413   36399 ssh_runner.go:195] Run: openssl version
	I0505 21:28:28.101667   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:28:28.114593   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119911   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.119966   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:28:28.126513   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:28:28.136871   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:28:28.148896   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154099   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.154153   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:28:28.160414   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:28:28.171000   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:28:28.184015   36399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189022   36399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.189068   36399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:28:28.196002   36399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
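
	[editor's note] The /etc/ssl/certs/<hash>.0 names above (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary: they are OpenSSL subject-name hashes of the corresponding PEMs, which is how the system trust store looks certificates up. A small sketch combining the two commands the log already uses, showing how such a link is derived:

	    PEM=/usr/share/ca-certificates/minikubeCA.pem
	    # openssl prints the subject hash; the trust store expects a <hash>.0 symlink to the PEM.
	    HASH=$(openssl x509 -hash -noout -in "$PEM")
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
	    ls -l "/etc/ssl/certs/${HASH}.0"
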
	I0505 21:28:28.206271   36399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:28:28.211552   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 21:28:28.218198   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 21:28:28.224606   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 21:28:28.230931   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 21:28:28.237169   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 21:28:28.243293   36399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
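
	[editor's note] Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, presumably so the run can decide whether control-plane certs need renewal. A compact sketch (the loop is an illustration, not the tool's own code) running the same check across the certs listed in the log:

	    # Fail loudly if any control-plane certificate expires within 24h (86400s).
	    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	             etcd/healthcheck-client etcd/peer front-proxy-client; do
	      f=/var/lib/minikube/certs/$c.crt
	      sudo openssl x509 -noout -in "$f" -checkend 86400 \
	        && echo "OK             $f" \
	        || echo "EXPIRING SOON  $f"
	    done
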
	I0505 21:28:28.249553   36399 kubeadm.go:391] StartCluster: {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:28:28.249672   36399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:28:28.249724   36399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:28:28.296303   36399 cri.go:89] found id: "e643f88ce68e29460e940448779ea8b8b309d24d97a13d57fe0b3139f920999a"
	I0505 21:28:28.296320   36399 cri.go:89] found id: "31d5340e9679504cad0e8fc998a460f07a03ad902d57ee2dea4946953cbad32d"
	I0505 21:28:28.296324   36399 cri.go:89] found id: "e6747aa9368ee1e6895cb4bf1eed8173977dc9bddfc0ea1b03750a3d23697184"
	I0505 21:28:28.296327   36399 cri.go:89] found id: "7894a12a0cfac62f67b7770ea3e5c8dbc28723b9c7c40b415fcdcf36899ac17d"
	I0505 21:28:28.296330   36399 cri.go:89] found id: "8f325a9ea25d6ff0517a638bff175fe1f4c646916941e4d3a93f5ff6f13f0187"
	I0505 21:28:28.296333   36399 cri.go:89] found id: "0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b"
	I0505 21:28:28.296335   36399 cri.go:89] found id: "e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d"
	I0505 21:28:28.296338   36399 cri.go:89] found id: "63d1d40ce592576b3c3adab70629f977f025cc822b6fc2638f0afd5a8034b355"
	I0505 21:28:28.296340   36399 cri.go:89] found id: "4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c"
	I0505 21:28:28.296347   36399 cri.go:89] found id: "abf4aae19a40108f61080f90924e47cd17198b595de08afa53c852acb001992f"
	I0505 21:28:28.296349   36399 cri.go:89] found id: "d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b"
	I0505 21:28:28.296353   36399 cri.go:89] found id: "b13d21aa2e8e79d9186e7d57f6c9dbcafdde76053f5205ee5d1bb46c65960d4f"
	I0505 21:28:28.296359   36399 cri.go:89] found id: "97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923"
	I0505 21:28:28.296363   36399 cri.go:89] found id: "6ebcc8c1017ed40ef18eba191c6a6df12e34304aa025d65f0340c08e107ac43d"
	I0505 21:28:28.296369   36399 cri.go:89] found id: ""
	I0505 21:28:28.296419   36399 ssh_runner.go:195] Run: sudo runc list -f json
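
	[editor's note] The container IDs above come from `crictl ps -a --quiet` filtered by the kube-system namespace label. To map each bare ID back to a pod and state, the same filter can be re-run without --quiet, or a single ID inspected; a minimal sketch, assuming a shell on the node (the crictl inspect call is an addition, the ID is taken from the list above):

	    # Same filter as the log, but with names and states instead of bare IDs.
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	    # Or inspect one container ID from the list above (JSON output, trimmed here).
	    sudo crictl inspect e643f88ce68e29460e940448779ea8b8b309d24d97a13d57fe0b3139f920999a | head -n 40
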
	
	
	==> CRI-O <==
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.265449151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944789265422461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d33e4603-725b-45fa-be05-b0a84af3d69c name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.266158978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80f7cb31-97b0-45a7-bf5b-e959ab6432b4 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.266215519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80f7cb31-97b0-45a7-bf5b-e959ab6432b4 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.266591285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944741390676974,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94752b251c71a06932b73db8104e820e473d1d4494c78884ffe58ad4eb867d3b,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944729019639584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944630237576006,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e
09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d637
76fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6,PodSandboxId:d0370265
c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash:
a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: c
e6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.
ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebc
f29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,
State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt
:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80f7cb31-97b0-45a7-bf5b-e959ab6432b4 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.312058663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=028a1621-d1cd-413d-93ad-4f5bc3f917e8 name=/runtime.v1.RuntimeService/Version
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.312166780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=028a1621-d1cd-413d-93ad-4f5bc3f917e8 name=/runtime.v1.RuntimeService/Version
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.313404692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1737d9f8-a82e-4ab8-8c1b-aaf1b9c19f05 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.313968455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944789313942309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1737d9f8-a82e-4ab8-8c1b-aaf1b9c19f05 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.314463359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=493426a4-9c64-4d73-a7dc-377aef7f10fe name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.314555735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=493426a4-9c64-4d73-a7dc-377aef7f10fe name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.315034735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944741390676974,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94752b251c71a06932b73db8104e820e473d1d4494c78884ffe58ad4eb867d3b,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944729019639584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944630237576006,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e
09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d637
76fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6,PodSandboxId:d0370265
c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash:
a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: c
e6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.
ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebc
f29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,
State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt
:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=493426a4-9c64-4d73-a7dc-377aef7f10fe name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.361021571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f71a9dc-0b4e-43fe-b355-3a657ada6bc9 name=/runtime.v1.RuntimeService/Version
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.361121462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f71a9dc-0b4e-43fe-b355-3a657ada6bc9 name=/runtime.v1.RuntimeService/Version
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.362467757Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f438a87-c2d3-4622-890d-77bcde4f20ee name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.363006370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944789362973026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f438a87-c2d3-4622-890d-77bcde4f20ee name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.363589175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba2d5295-217c-438e-9681-0c7a417d75bd name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.363676752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba2d5295-217c-438e-9681-0c7a417d75bd name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.364192268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944741390676974,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94752b251c71a06932b73db8104e820e473d1d4494c78884ffe58ad4eb867d3b,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944729019639584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944630237576006,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e
09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d637
76fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6,PodSandboxId:d0370265
c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash:
a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: c
e6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.
ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebc
f29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,
State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt
:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba2d5295-217c-438e-9681-0c7a417d75bd name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.411628256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68656fe9-1045-4feb-b62d-2d9314f155fe name=/runtime.v1.RuntimeService/Version
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.411706052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68656fe9-1045-4feb-b62d-2d9314f155fe name=/runtime.v1.RuntimeService/Version
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.413032678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0017f5e8-ccf7-4db8-8317-bda0d7492336 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.413427203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714944789413403525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0017f5e8-ccf7-4db8-8317-bda0d7492336 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.414168026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e647248-b51c-4896-a5e9-895c85afedb9 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.414251165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e647248-b51c-4896-a5e9-895c85afedb9 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:33:09 ha-322980 crio[3885]: time="2024-05-05 21:33:09.414644566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944741390676974,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94752b251c71a06932b73db8104e820e473d1d4494c78884ffe58ad4eb867d3b,PodSandboxId:e684baf5ef11a979a8c30779f4f6f64f196ac4dbc0c9b122734952ac705ce180,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944729019639584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944630237576006,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714944604405013234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944583390637724,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714944558414000718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138,PodSandboxId:68ad3ff729cb28bf9363bf24198ca022f340f699f2cf73be3068a1a5c78df7fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944550398281276,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944545718355104,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2d43ee9b97e
09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1714944524505500895,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d637
76fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944512450633114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6,PodSandboxId:d0370265
c798af275a606870dfff8f094028af5afdb5a22c6825e212c9d3197f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512881106094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d,PodSandboxId:95e07dfd571487a2a6cf7710a0ca46ae125d20737d36ddd7c9a44fb93a9c51a9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944512620038370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944512601051435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoc
ol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944512361230033,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotati
ons:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944512349273773,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash:
a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9743f3da0de5672bc067b03d1bf5a1bd2b516c0135ee81ae43c0d2cab9bcfdf,PodSandboxId:238b5b24a572eba24b52bf72fafa80a3a1105acc4ba0c3e58b255a5c6a8a7bc2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714943992555012726,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: c
e6d6b7c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d,PodSandboxId:9f56aff0e5f86dabc75e935bba2e8f81a5f7d30f2613ed055fe026fab10ff612,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788390149653,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.
ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b,PodSandboxId:cd560b1055b35f9f6500b5eef53e7f8250f4bc20a4f8e6562e8b30bff6235e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714943788394548523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c,PodSandboxId:8b3a42343ade0d10dbd7caf52ae4583f67790d020c606261ab52cb7ceda9cd0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebc
f29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714943786049319375,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b,PodSandboxId:913466e1710aa2d1fa8e45f17fd08f82c114b7242bff697822c1bf5d2e0b7a3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,
State:CONTAINER_EXITED,CreatedAt:1714943765197403698,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923,PodSandboxId:01d81d8dc3bcbbd11e9a335f2a7aee378da1d273b127ffeb0f72b10080cb6bab,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt
:1714943765081696946,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e647248-b51c-4896-a5e9-895c85afedb9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	355a3bf6a6f15       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      48 seconds ago       Running             kindnet-cni               5                   64801e377a379       kindnet-lwtnx
	94752b251c71a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Exited              kube-apiserver            4                   e684baf5ef11a       kube-apiserver-ha-322980
	8b1fc54998e0d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  1                   8e6a479fdea9d       kube-vip-ha-322980
	d64f6490c58bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Running             storage-provisioner       4                   68ad3ff729cb2       storage-provisioner
	d8e5582057ffa       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago        Exited              kindnet-cni               4                   64801e377a379       kindnet-lwtnx
	b48ee84cd3ceb       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago        Running             kube-controller-manager   2                   95e07dfd57148       kube-controller-manager-ha-322980
	0c012cc95d188       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       3                   68ad3ff729cb2       storage-provisioner
	378349efe1d23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago        Running             busybox                   1                   9dfb38e6022a7       busybox-fc5497c4f-xt9l5
	ea2d43ee9b97e       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago        Exited              kube-vip                  0                   8e6a479fdea9d       kube-vip-ha-322980
	067837019b5f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago        Running             coredns                   1                   d0370265c798a       coredns-7db6d8ff4d-fqt45
	06be80792a085       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago        Exited              kube-controller-manager   1                   95e07dfd57148       kube-controller-manager-ha-322980
	858ab02f25618       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago        Running             coredns                   1                   cd2a674999e8a       coredns-7db6d8ff4d-78zmw
	852f56752c643       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      4 minutes ago        Running             kube-proxy                1                   e36e99eaa4a61       kube-proxy-8xdzd
	d864b4fda0bb9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      4 minutes ago        Running             kube-scheduler            1                   4777f05174b29       kube-scheduler-ha-322980
	366a7799ffc65       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago        Running             etcd                      1                   55b2bc86d17b3       etcd-ha-322980
	d9743f3da0de5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago       Exited              busybox                   0                   238b5b24a572e       busybox-fc5497c4f-xt9l5
	0b360d142570d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago       Exited              coredns                   0                   cd560b1055b35       coredns-7db6d8ff4d-fqt45
	e065fafa4b7aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago       Exited              coredns                   0                   9f56aff0e5f86       coredns-7db6d8ff4d-78zmw
	4da23c6720461       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      16 minutes ago       Exited              kube-proxy                0                   8b3a42343ade0       kube-proxy-8xdzd
	d73ef383ce1ab       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      17 minutes ago       Exited              kube-scheduler            0                   913466e1710aa       kube-scheduler-ha-322980
	97769959b22d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago       Exited              etcd                      0                   01d81d8dc3bcb       etcd-ha-322980
	
	
	==> coredns [067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[107334747]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:41.835) (total time: 11345ms):
	Trace[107334747]: ---"Objects listed" error:Unauthorized 11345ms (21:32:53.181)
	Trace[107334747]: [11.34598316s] [11.34598316s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1004464973]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:42.360) (total time: 10821ms):
	Trace[1004464973]: ---"Objects listed" error:Unauthorized 10821ms (21:32:53.182)
	Trace[1004464973]: [10.821370545s] [10.821370545s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1722544533]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:41.298) (total time: 11884ms):
	Trace[1722544533]: ---"Objects listed" error:Unauthorized 11884ms (21:32:53.182)
	Trace[1722544533]: [11.884116329s] [11.884116329s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1580830706]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:56.655) (total time: 10534ms):
	Trace[1580830706]: ---"Objects listed" error:Unauthorized 10534ms (21:33:07.190)
	Trace[1580830706]: [10.534715096s] [10.534715096s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	
	
	==> coredns [0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b] <==
	[INFO] 10.244.1.2:51278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017965s
	[INFO] 10.244.1.2:37849 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301689s
	[INFO] 10.244.0.4:58808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118281s
	[INFO] 10.244.0.4:59347 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074943s
	[INFO] 10.244.0.4:44264 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127442s
	[INFO] 10.244.0.4:45870 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001035173s
	[INFO] 10.244.0.4:45397 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126149s
	[INFO] 10.244.2.2:38985 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241724s
	[INFO] 10.244.1.2:41200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185837s
	[INFO] 10.244.0.4:53459 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188027s
	[INFO] 10.244.0.4:43760 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146395s
	[INFO] 10.244.2.2:45375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112163s
	[INFO] 10.244.2.2:60638 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000225418s
	[INFO] 10.244.1.2:33012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251463s
	[INFO] 10.244.0.4:48613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079688s
	[INFO] 10.244.0.4:54870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050324s
	[INFO] 10.244.0.4:36700 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167489s
	[INFO] 10.244.0.4:56859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077358s
	[INFO] 10.244.2.2:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122063s
	[INFO] 10.244.2.2:43717 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123902s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8] <==
	Trace[1087262691]: [12.177832165s] [12.177832165s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2094402806]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:42.233) (total time: 10949ms):
	Trace[2094402806]: ---"Objects listed" error:Unauthorized 10949ms (21:32:53.182)
	Trace[2094402806]: [10.949609309s] [10.949609309s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[718903134]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:41.764) (total time: 11420ms):
	Trace[718903134]: ---"Objects listed" error:Unauthorized 11419ms (21:32:53.184)
	Trace[718903134]: [11.42063193s] [11.42063193s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[717541916]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:56.427) (total time: 10831ms):
	Trace[717541916]: ---"Objects listed" error:Unauthorized 10831ms (21:33:07.259)
	Trace[717541916]: [10.83175781s] [10.83175781s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1786829321]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:56.656) (total time: 10617ms):
	Trace[1786829321]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 10617ms (21:33:07.274)
	Trace[1786829321]: [10.617187248s] [10.617187248s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d] <==
	[INFO] 10.244.0.4:43928 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004146s
	[INFO] 10.244.2.2:44358 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001832218s
	[INFO] 10.244.2.2:34081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00017944s
	[INFO] 10.244.2.2:36047 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087749s
	[INFO] 10.244.2.2:60557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001143135s
	[INFO] 10.244.2.2:60835 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073052s
	[INFO] 10.244.2.2:42876 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093376s
	[INFO] 10.244.2.2:33057 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070619s
	[INFO] 10.244.1.2:41910 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009436s
	[INFO] 10.244.1.2:43839 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082555s
	[INFO] 10.244.1.2:39008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075851s
	[INFO] 10.244.0.4:47500 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110566s
	[INFO] 10.244.0.4:44728 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071752s
	[INFO] 10.244.2.2:38205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222144s
	[INFO] 10.244.2.2:46321 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164371s
	[INFO] 10.244.1.2:41080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205837s
	[INFO] 10.244.1.2:58822 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264144s
	[INFO] 10.244.1.2:55995 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174393s
	[INFO] 10.244.2.2:46471 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00069286s
	[INFO] 10.244.2.2:52414 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000163744s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +8.501831] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.064246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066779] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.227983] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.115503] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.299594] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +5.048468] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.072016] kauditd_printk_skb: 130 callbacks suppressed
	[May 5 21:16] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.935027] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.150561] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.089537] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.653864] kauditd_printk_skb: 21 callbacks suppressed
	[May 5 21:18] kauditd_printk_skb: 74 callbacks suppressed
	[May 5 21:28] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.163899] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.174337] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.161075] systemd-fstab-generator[3841]: Ignoring "noauto" option for root device
	[  +0.301050] systemd-fstab-generator[3869]: Ignoring "noauto" option for root device
	[  +0.856611] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	[  +4.601668] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.029599] kauditd_printk_skb: 86 callbacks suppressed
	[ +11.080916] kauditd_printk_skb: 1 callbacks suppressed
	[May 5 21:29] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.083309] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a] <==
	2024/05/05 21:33:07 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-05T21:33:07.292698Z","caller":"traceutil/trace.go:171","msg":"trace[1520075531] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"6.103726127s","start":"2024-05-05T21:33:01.188969Z","end":"2024-05-05T21:33:07.292695Z","steps":["trace[1520075531] 'agreement among raft nodes before linearized reading'  (duration: 6.103705122s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:33:07.295619Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:33:01.188966Z","time spent":"6.106643829s","remote":"127.0.0.1:50894","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:10000 "}
	2024/05/05 21:33:07 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:33:07.295742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.111902421s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-05T21:33:07.296062Z","caller":"traceutil/trace.go:171","msg":"trace[1631844125] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"6.11230673s","start":"2024-05-05T21:33:01.183748Z","end":"2024-05-05T21:33:07.296054Z","steps":["trace[1631844125] 'agreement among raft nodes before linearized reading'  (duration: 6.111984558s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:33:07.296186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:33:01.183744Z","time spent":"6.112431851s","remote":"127.0.0.1:50776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-05T21:33:07.297162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.110082467s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-05T21:33:07.297386Z","caller":"traceutil/trace.go:171","msg":"trace[804660133] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"6.110225471s","start":"2024-05-05T21:33:01.187061Z","end":"2024-05-05T21:33:07.297286Z","steps":["trace[804660133] 'agreement among raft nodes before linearized reading'  (duration: 6.11008825s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:33:07.297514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:33:01.187057Z","time spent":"6.110445613s","remote":"127.0.0.1:50812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	2024/05/05 21:33:07 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:33:07.670876Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1786397753024494759,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-05T21:33:08.172133Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1786397753024494759,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-05T21:33:08.388153Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"a1efc654ffe9f445","rtt":"10.658942ms","error":"dial tcp 192.168.39.228:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-05T21:33:08.410412Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"a1efc654ffe9f445","rtt":"1.072438ms","error":"dial tcp 192.168.39.228:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-05T21:33:08.474231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-05T21:33:08.474253Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8c95a24aec1a1ea5","rtt":"0s","error":"dial tcp 192.168.39.29:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-05T21:33:08.67264Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1786397753024494759,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-05-05T21:33:08.708367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-05T21:33:08.708459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-05T21:33:08.708481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca received MsgPreVoteResp from dced536bf07718ca at term 3"}
	{"level":"info","ts":"2024-05-05T21:33:08.708502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca [logterm: 3, index: 3058] sent MsgPreVote request to 8c95a24aec1a1ea5 at term 3"}
	{"level":"info","ts":"2024-05-05T21:33:08.708516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dced536bf07718ca [logterm: 3, index: 3058] sent MsgPreVote request to a1efc654ffe9f445 at term 3"}
	{"level":"warn","ts":"2024-05-05T21:33:09.173882Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1786397753024494759,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-05T21:33:09.674834Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1786397753024494759,"retry-timeout":"500ms"}
	
	
	==> etcd [97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923] <==
	{"level":"info","ts":"2024-05-05T21:26:54.126418Z","caller":"traceutil/trace.go:171","msg":"trace[1422100536] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"439.354424ms","start":"2024-05-05T21:26:53.687056Z","end":"2024-05-05T21:26:54.126411Z","steps":["trace[1422100536] 'agreement among raft nodes before linearized reading'  (duration: 436.623479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:26:54.126525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:26:53.687052Z","time spent":"439.463134ms","remote":"127.0.0.1:43734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/05 21:26:54 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:26:54.176083Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:26:54.176142Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:26:54.176235Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"dced536bf07718ca","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-05T21:26:54.176415Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176459Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.17649Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176585Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176654Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176819Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.17686Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:26:54.176869Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.176883Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.176901Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177003Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177072Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177105Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.177115Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:26:54.180402Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:26:54.180578Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:26:54.180616Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-322980","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.178:2380"],"advertise-client-urls":["https://192.168.39.178:2379"]}
	
	
	==> kernel <==
	 21:33:09 up 17 min,  0 users,  load average: 0.30, 0.74, 0.48
	Linux ha-322980 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c] <==
	I0505 21:32:21.859857       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0505 21:32:21.859939       1 main.go:107] hostIP = 192.168.39.178
	podIP = 192.168.39.178
	I0505 21:32:21.860106       1 main.go:116] setting mtu 1500 for CNI 
	I0505 21:32:21.860127       1 main.go:146] kindnetd IP family: "ipv4"
	I0505 21:32:21.860150       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0505 21:32:32.182919       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0505 21:32:46.182451       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0505 21:33:00.185274       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0505 21:33:04.870290       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0505 21:33:07.942325       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kindnet [d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e] <==
	I0505 21:29:54.483838       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:30:04.500852       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:30:04.500874       1 main.go:227] handling current node
	I0505 21:30:04.500883       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:30:04.500888       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:30:04.500984       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0505 21:30:04.500989       1 main.go:250] Node ha-322980-m03 has CIDR [10.244.2.0/24] 
	I0505 21:30:04.501029       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:30:04.501033       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:30:14.519327       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:30:14.519375       1 main.go:227] handling current node
	I0505 21:30:14.519393       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:30:14.519398       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:30:14.519561       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:30:14.519567       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:30:24.527940       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0505 21:30:24.530899       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0505 21:30:25.533920       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0505 21:30:40.160237       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0505 21:30:54.157547       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	panic: Reached maximum retries obtaining node list: etcdserver: request timed out
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [94752b251c71a06932b73db8104e820e473d1d4494c78884ffe58ad4eb867d3b] <==
	F0505 21:33:07.187252       1 hooks.go:203] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
	E0505 21:33:07.256984       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	E0505 21:33:07.187289       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	W0505 21:33:07.185462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ServiceAccount: etcdserver: request timed out
	I0505 21:33:07.258329       1 trace.go:236] Trace[782034285]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-May-2024 21:32:56.460) (total time: 10797ms):
	Trace[782034285]: ---"Objects listed" error:etcdserver: request timed out 10724ms (21:33:07.185)
	Trace[782034285]: [10.797332223s] [10.797332223s] END
	E0505 21:33:07.258357       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, etcdserver: request timed out]"
	E0505 21:33:07.258370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: etcdserver: request timed out
	W0505 21:33:07.187256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out
	W0505 21:33:07.186929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicyBinding: etcdserver: request timed out
	W0505 21:33:07.185479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: etcdserver: request timed out
	W0505 21:33:07.185500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: etcdserver: request timed out
	W0505 21:33:07.185526       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: etcdserver: request timed out
	W0505 21:33:07.185547       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: etcdserver: request timed out
	W0505 21:33:07.185572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingWebhookConfiguration: etcdserver: request timed out
	W0505 21:33:07.185590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PriorityLevelConfiguration: etcdserver: request timed out
	W0505 21:33:07.185611       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Endpoints: etcdserver: request timed out
	W0505 21:33:07.185630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ClusterRole: etcdserver: request timed out
	W0505 21:33:07.186959       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	W0505 21:33:07.187077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: etcdserver: request timed out
	W0505 21:33:07.187099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ClusterRoleBinding: etcdserver: request timed out
	W0505 21:33:07.187127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.MutatingWebhookConfiguration: etcdserver: request timed out
	W0505 21:33:07.187148       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: etcdserver: request timed out
	W0505 21:33:07.187278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ResourceQuota: etcdserver: request timed out
	
	
	==> kube-controller-manager [06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d] <==
	I0505 21:28:34.536857       1 serving.go:380] Generated self-signed cert in-memory
	I0505 21:28:34.963263       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 21:28:34.963366       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:28:34.965468       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:28:34.965623       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 21:28:34.966267       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:28:34.966207       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0505 21:28:55.190151       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.178:8443/healthz\": dial tcp 192.168.39.178:8443: connect: connection refused"
	
	
	==> kube-controller-manager [b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9] <==
	W0505 21:33:03.247931       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0505 21:33:03.750303       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0505 21:33:04.752645       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0505 21:33:06.754852       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0505 21:33:06.754986       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.178:8443/api/v1/nodes/ha-322980-m02/status\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node="ha-322980-m02"
	W0505 21:33:06.756056       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0505 21:33:06.856734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingWebhookConfiguration: validatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "validatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
	E0505 21:33:06.856939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: validatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "validatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
	E0505 21:33:07.671629       1 gc_controller.go:153] "Failed to get node" err="node \"ha-322980-m03\" not found" logger="pod-garbage-collector-controller" node="ha-322980-m03"
	E0505 21:33:07.671726       1 gc_controller.go:153] "Failed to get node" err="node \"ha-322980-m03\" not found" logger="pod-garbage-collector-controller" node="ha-322980-m03"
	E0505 21:33:07.671826       1 gc_controller.go:153] "Failed to get node" err="node \"ha-322980-m03\" not found" logger="pod-garbage-collector-controller" node="ha-322980-m03"
	E0505 21:33:07.671868       1 gc_controller.go:153] "Failed to get node" err="node \"ha-322980-m03\" not found" logger="pod-garbage-collector-controller" node="ha-322980-m03"
	E0505 21:33:07.671914       1 gc_controller.go:153] "Failed to get node" err="node \"ha-322980-m03\" not found" logger="pod-garbage-collector-controller" node="ha-322980-m03"
	E0505 21:33:07.671945       1 gc_controller.go:153] "Failed to get node" err="node \"ha-322980-m03\" not found" logger="pod-garbage-collector-controller" node="ha-322980-m03"
	W0505 21:33:07.672968       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:08.173934       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:08.276074       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.178:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	W0505 21:33:09.174944       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:09.175038       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:09.184443       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:09.277500       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:09.675697       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:09.685943       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:33:09.995382       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.178:8443/api/v1/persistentvolumes?resourceVersion=2603": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:33:09.995527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.178:8443/api/v1/persistentvolumes?resourceVersion=2603": dial tcp 192.168.39.178:8443: connect: connection refused
	
	
	==> kube-proxy [4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c] <==
	E0505 21:25:37.638259       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:37.638443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:37.638505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:40.774308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:40.774433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:43.846169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:43.846281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:43.846378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:43.846415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:49.288357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:49.288477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:52.359464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:52.359604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:25:52.359850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:25:52.360135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:01.575624       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:01.575831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:07.718256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:07.718308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2067": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:16.935169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:16.935384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:26.151522       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:26.151611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2089": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:26:50.729138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:26:50.729278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2042": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b] <==
	W0505 21:31:15.944079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:15.944094       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:15.944189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:25.158902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:25.158993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:25.159118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:25.159183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:28.230693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:28.230870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:43.591408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:43.591709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:43.592238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:43.592473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:46.663488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:46.663604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:11.245050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:11.249052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:11.248957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:11.253949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:17.383674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:17.383888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:45.031724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:45.032108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:33:00.390486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:33:00.390704       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b] <==
	W0505 21:26:48.759656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:26:48.759854       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 21:26:49.007969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:49.008025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:49.192688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:26:49.192899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:26:49.499913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 21:26:49.500129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 21:26:49.690540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:26:49.690638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 21:26:49.803920       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 21:26:49.804019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:26:49.837414       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:26:49.837447       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:26:50.166566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:50.166673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:50.189955       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:26:50.190063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:26:50.388221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:50.388612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:26:51.048541       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:26:51.048638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:26:51.142262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:51.142364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:26:54.094710       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f] <==
	E0505 21:32:43.984609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:32:44.527439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:32:44.527519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:32:44.577580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:32:44.577681       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:32:44.983096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:32:44.983210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:32:46.708263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:32:46.708331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:32:46.842131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:32:46.842260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 21:32:47.195060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 21:32:47.195248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 21:32:47.357428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 21:32:47.357536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:32:49.280307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:32:49.280376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:32:50.253345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 21:32:50.253408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 21:32:55.016954       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:32:55.017115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:32:56.239220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:32:56.239331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 21:32:56.628243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:32:56.628317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kubelet <==
	May 05 21:32:57 ha-322980 kubelet[1385]: E0505 21:32:57.319294    1385 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2484": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:32:57 ha-322980 kubelet[1385]: W0505 21:32:57.319420    1385 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2416": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:32:57 ha-322980 kubelet[1385]: E0505 21:32:57.319502    1385 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-322980&resourceVersion=2416": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:32:57 ha-322980 kubelet[1385]: E0505 21:32:57.319633    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:32:57 ha-322980 kubelet[1385]: E0505 21:32:57.319630    1385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	May 05 21:33:00 ha-322980 kubelet[1385]: W0505 21:33:00.391240    1385 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=2488": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:33:00 ha-322980 kubelet[1385]: E0505 21:33:00.391363    1385 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=2488": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:33:00 ha-322980 kubelet[1385]: I0505 21:33:00.391449    1385 status_manager.go:853] "Failed to get status for pod" podUID="25cdcec1c37ba86157b0b42297dfe2cf" pod="kube-system/kube-apiserver-ha-322980" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:00 ha-322980 kubelet[1385]: E0505 21:33:00.391928    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:03 ha-322980 kubelet[1385]: E0505 21:33:03.462388    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:03 ha-322980 kubelet[1385]: E0505 21:33:03.462379    1385 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-322980.17ccb4c0ecdf860d\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-322980.17ccb4c0ecdf860d  kube-system   2174 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-322980,UID:25cdcec1c37ba86157b0b42297dfe2cf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-322980,},FirstTimestamp:2024-05-05 21:24:58 +0000 UTC,LastTimestamp:2024-05-05 21:30:24.823467443 +0000 UTC m=+850.595181527,Count:26,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-322980,}"
	May 05 21:33:03 ha-322980 kubelet[1385]: I0505 21:33:03.462603    1385 status_manager.go:853] "Failed to get status for pod" podUID="4033535e-69f1-426c-bb17-831fad6336d5" pod="kube-system/kindnet-lwtnx" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-lwtnx\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:06 ha-322980 kubelet[1385]: I0505 21:33:06.534459    1385 status_manager.go:853] "Failed to get status for pod" podUID="b4b10859196db0958fa2b1c992ad5e8a" pod="kube-system/kube-vip-ha-322980" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:06 ha-322980 kubelet[1385]: E0505 21:33:06.534863    1385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	May 05 21:33:06 ha-322980 kubelet[1385]: E0505 21:33:06.535363    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:08 ha-322980 kubelet[1385]: I0505 21:33:08.104153    1385 scope.go:117] "RemoveContainer" containerID="a6a90eca6999f3420eb8cc58e5af0b595de2c1bbf36f04f08d7edea98e8cef1d"
	May 05 21:33:08 ha-322980 kubelet[1385]: I0505 21:33:08.104528    1385 scope.go:117] "RemoveContainer" containerID="94752b251c71a06932b73db8104e820e473d1d4494c78884ffe58ad4eb867d3b"
	May 05 21:33:08 ha-322980 kubelet[1385]: E0505 21:33:08.105053    1385 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-322980_kube-system(25cdcec1c37ba86157b0b42297dfe2cf)\"" pod="kube-system/kube-apiserver-ha-322980" podUID="25cdcec1c37ba86157b0b42297dfe2cf"
	May 05 21:33:09 ha-322980 kubelet[1385]: W0505 21:33:09.607285    1385 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:33:09 ha-322980 kubelet[1385]: E0505 21:33:09.607410    1385 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:33:09 ha-322980 kubelet[1385]: W0505 21:33:09.607470    1385 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2475": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:33:09 ha-322980 kubelet[1385]: E0505 21:33:09.607535    1385 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2475": dial tcp 192.168.39.254:8443: connect: no route to host
	May 05 21:33:09 ha-322980 kubelet[1385]: I0505 21:33:09.607499    1385 status_manager.go:853] "Failed to get status for pod" podUID="b4b10859196db0958fa2b1c992ad5e8a" pod="kube-system/kube-vip-ha-322980" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:09 ha-322980 kubelet[1385]: E0505 21:33:09.607565    1385 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-322980\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 05 21:33:09 ha-322980 kubelet[1385]: E0505 21:33:09.607577    1385 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 21:33:08.946571   38543 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18602-11466/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
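Note on the stderr above: "bufio.Scanner: token too long" means the post-mortem logs command hit a line in lastStart.txt longer than bufio.Scanner's default 64 KiB token limit, so the last-start log was skipped while the rest of the logs were still collected. A quick way to confirm the oversized line on the Jenkins host (a sketch only, assuming GNU wc with the -L flag is available there):

    wc -L /home/jenkins/minikube-integration/18602-11466/.minikube/logs/lastStart.txt   # prints the length of the longest line in the file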
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-322980 -n ha-322980
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-322980 -n ha-322980: exit status 2 (245.241144ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-322980" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (173.88s)
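Nearly every kubelet error above ends in "dial tcp 192.168.39.254:8443: connect: no route to host"; 192.168.39.254 is the profile's APIServerHAVIP (see the config dump later in this report), so ha-322980 could not reach the kube-vip virtual IP at that point. To probe the VIP from inside the node by hand, something like the following should work (a sketch only; it assumes curl and iproute2 are present in the node image, and the VIP is of course expected to stay unreachable while the control planes are stopped):

    out/minikube-linux-amd64 -p ha-322980 ssh "ip route get 192.168.39.254"                    # does the node still have a route to the VIP?
    out/minikube-linux-amd64 -p ha-322980 ssh "curl -sk https://192.168.39.254:8443/healthz"   # does any apiserver answer behind it?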

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (314.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-322980 --control-plane -v=7 --alsologtostderr
E0505 21:39:31.828977   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:41:51.947563   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
ha_test.go:605: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p ha-322980 --control-plane -v=7 --alsologtostderr: exit status 80 (5m11.571398317s)

                                                
                                                
-- stdout --
	* Adding node m05 to cluster ha-322980 as [worker control-plane]
	* Starting "ha-322980-m05" control-plane node in "ha-322980" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:39:00.071194   40254 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:39:00.071332   40254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:39:00.071345   40254 out.go:304] Setting ErrFile to fd 2...
	I0505 21:39:00.071351   40254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:39:00.071640   40254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:39:00.071947   40254 mustload.go:65] Loading cluster: ha-322980
	I0505 21:39:00.072324   40254 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:39:00.072684   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:00.072732   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:00.087138   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
	I0505 21:39:00.087582   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:00.088209   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:00.088231   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:00.088525   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:00.088731   40254 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:39:00.090281   40254 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:39:00.090562   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:00.090595   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:00.104417   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37659
	I0505 21:39:00.104840   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:00.105298   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:00.105333   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:00.105656   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:00.105841   40254 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:39:00.106262   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:00.106297   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:00.120008   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0505 21:39:00.120378   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:00.120848   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:00.120874   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:00.121192   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:00.121366   40254 main.go:141] libmachine: (ha-322980-m02) Calling .GetState
	I0505 21:39:00.122828   40254 host.go:66] Checking if "ha-322980-m02" exists ...
	I0505 21:39:00.123111   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:00.123142   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:00.136623   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40155
	I0505 21:39:00.137039   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:00.137512   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:00.137532   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:00.137876   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:00.138066   40254 main.go:141] libmachine: (ha-322980-m02) Calling .DriverName
	I0505 21:39:00.138257   40254 api_server.go:166] Checking apiserver status ...
	I0505 21:39:00.138324   40254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:39:00.138361   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:39:00.140752   40254 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:39:00.141132   40254 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:39:00.141163   40254 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:39:00.141267   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:39:00.141439   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:39:00.141603   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:39:00.141759   40254 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:39:00.237403   40254 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8620/cgroup
	W0505 21:39:00.248227   40254 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/8620/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:39:00.248279   40254 ssh_runner.go:195] Run: ls
	I0505 21:39:00.254179   40254 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I0505 21:39:00.258885   40254 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I0505 21:39:00.261079   40254 out.go:177] * Adding node m05 to cluster ha-322980 as [worker control-plane]
	I0505 21:39:00.262734   40254 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:39:00.262861   40254 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:39:00.264877   40254 out.go:177] * Starting "ha-322980-m05" control-plane node in "ha-322980" cluster
	I0505 21:39:00.266438   40254 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:39:00.266477   40254 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:39:00.266499   40254 cache.go:56] Caching tarball of preloaded images
	I0505 21:39:00.266601   40254 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:39:00.266616   40254 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:39:00.266700   40254 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:39:00.266860   40254 start.go:360] acquireMachinesLock for ha-322980-m05: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:39:00.266939   40254 start.go:364] duration metric: took 29.005µs to acquireMachinesLock for "ha-322980-m05"
	I0505 21:39:00.266965   40254 start.go:93] Provisioning new machine with config: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m05 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}
	I0505 21:39:00.267097   40254 start.go:125] createHost starting for "m05" (driver="kvm2")
	I0505 21:39:00.268674   40254 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 21:39:00.268802   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:00.268840   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:00.285600   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0505 21:39:00.286068   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:00.286613   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:00.286637   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:00.286966   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:00.287149   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetMachineName
	I0505 21:39:00.287331   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:00.287533   40254 start.go:159] libmachine.API.Create for "ha-322980" (driver="kvm2")
	I0505 21:39:00.287568   40254 client.go:168] LocalClient.Create starting
	I0505 21:39:00.287605   40254 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 21:39:00.287646   40254 main.go:141] libmachine: Decoding PEM data...
	I0505 21:39:00.287668   40254 main.go:141] libmachine: Parsing certificate...
	I0505 21:39:00.287737   40254 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 21:39:00.287763   40254 main.go:141] libmachine: Decoding PEM data...
	I0505 21:39:00.287781   40254 main.go:141] libmachine: Parsing certificate...
	I0505 21:39:00.287814   40254 main.go:141] libmachine: Running pre-create checks...
	I0505 21:39:00.287825   40254 main.go:141] libmachine: (ha-322980-m05) Calling .PreCreateCheck
	I0505 21:39:00.288028   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetConfigRaw
	I0505 21:39:00.288426   40254 main.go:141] libmachine: Creating machine...
	I0505 21:39:00.288444   40254 main.go:141] libmachine: (ha-322980-m05) Calling .Create
	I0505 21:39:00.288573   40254 main.go:141] libmachine: (ha-322980-m05) Creating KVM machine...
	I0505 21:39:00.289898   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found existing default KVM network
	I0505 21:39:00.290051   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found existing private KVM network mk-ha-322980
	I0505 21:39:00.290168   40254 main.go:141] libmachine: (ha-322980-m05) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05 ...
	I0505 21:39:00.290207   40254 main.go:141] libmachine: (ha-322980-m05) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 21:39:00.290319   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:00.290160   40291 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:39:00.290376   40254 main.go:141] libmachine: (ha-322980-m05) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 21:39:00.503024   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:00.502892   40291 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/id_rsa...
	I0505 21:39:00.635683   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:00.635561   40291 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/ha-322980-m05.rawdisk...
	I0505 21:39:00.635719   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Writing magic tar header
	I0505 21:39:00.635734   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Writing SSH key tar header
	I0505 21:39:00.635747   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:00.635674   40291 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05 ...
	I0505 21:39:00.635765   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05
	I0505 21:39:00.635840   40254 main.go:141] libmachine: (ha-322980-m05) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05 (perms=drwx------)
	I0505 21:39:00.635869   40254 main.go:141] libmachine: (ha-322980-m05) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 21:39:00.635881   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 21:39:00.635901   40254 main.go:141] libmachine: (ha-322980-m05) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 21:39:00.635918   40254 main.go:141] libmachine: (ha-322980-m05) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 21:39:00.635931   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:39:00.635947   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 21:39:00.635959   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 21:39:00.635977   40254 main.go:141] libmachine: (ha-322980-m05) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 21:39:00.635998   40254 main.go:141] libmachine: (ha-322980-m05) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 21:39:00.636008   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Checking permissions on dir: /home/jenkins
	I0505 21:39:00.636023   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Checking permissions on dir: /home
	I0505 21:39:00.636034   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Skipping /home - not owner
	I0505 21:39:00.636049   40254 main.go:141] libmachine: (ha-322980-m05) Creating domain...
	I0505 21:39:00.637062   40254 main.go:141] libmachine: (ha-322980-m05) define libvirt domain using xml: 
	I0505 21:39:00.637086   40254 main.go:141] libmachine: (ha-322980-m05) <domain type='kvm'>
	I0505 21:39:00.637097   40254 main.go:141] libmachine: (ha-322980-m05)   <name>ha-322980-m05</name>
	I0505 21:39:00.637105   40254 main.go:141] libmachine: (ha-322980-m05)   <memory unit='MiB'>2200</memory>
	I0505 21:39:00.637119   40254 main.go:141] libmachine: (ha-322980-m05)   <vcpu>2</vcpu>
	I0505 21:39:00.637127   40254 main.go:141] libmachine: (ha-322980-m05)   <features>
	I0505 21:39:00.637139   40254 main.go:141] libmachine: (ha-322980-m05)     <acpi/>
	I0505 21:39:00.637147   40254 main.go:141] libmachine: (ha-322980-m05)     <apic/>
	I0505 21:39:00.637156   40254 main.go:141] libmachine: (ha-322980-m05)     <pae/>
	I0505 21:39:00.637169   40254 main.go:141] libmachine: (ha-322980-m05)     
	I0505 21:39:00.637179   40254 main.go:141] libmachine: (ha-322980-m05)   </features>
	I0505 21:39:00.637196   40254 main.go:141] libmachine: (ha-322980-m05)   <cpu mode='host-passthrough'>
	I0505 21:39:00.637208   40254 main.go:141] libmachine: (ha-322980-m05)   
	I0505 21:39:00.637220   40254 main.go:141] libmachine: (ha-322980-m05)   </cpu>
	I0505 21:39:00.637232   40254 main.go:141] libmachine: (ha-322980-m05)   <os>
	I0505 21:39:00.637243   40254 main.go:141] libmachine: (ha-322980-m05)     <type>hvm</type>
	I0505 21:39:00.637255   40254 main.go:141] libmachine: (ha-322980-m05)     <boot dev='cdrom'/>
	I0505 21:39:00.637276   40254 main.go:141] libmachine: (ha-322980-m05)     <boot dev='hd'/>
	I0505 21:39:00.637305   40254 main.go:141] libmachine: (ha-322980-m05)     <bootmenu enable='no'/>
	I0505 21:39:00.637318   40254 main.go:141] libmachine: (ha-322980-m05)   </os>
	I0505 21:39:00.637330   40254 main.go:141] libmachine: (ha-322980-m05)   <devices>
	I0505 21:39:00.637353   40254 main.go:141] libmachine: (ha-322980-m05)     <disk type='file' device='cdrom'>
	I0505 21:39:00.637365   40254 main.go:141] libmachine: (ha-322980-m05)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/boot2docker.iso'/>
	I0505 21:39:00.637374   40254 main.go:141] libmachine: (ha-322980-m05)       <target dev='hdc' bus='scsi'/>
	I0505 21:39:00.637378   40254 main.go:141] libmachine: (ha-322980-m05)       <readonly/>
	I0505 21:39:00.637384   40254 main.go:141] libmachine: (ha-322980-m05)     </disk>
	I0505 21:39:00.637398   40254 main.go:141] libmachine: (ha-322980-m05)     <disk type='file' device='disk'>
	I0505 21:39:00.637424   40254 main.go:141] libmachine: (ha-322980-m05)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 21:39:00.637444   40254 main.go:141] libmachine: (ha-322980-m05)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/ha-322980-m05.rawdisk'/>
	I0505 21:39:00.637454   40254 main.go:141] libmachine: (ha-322980-m05)       <target dev='hda' bus='virtio'/>
	I0505 21:39:00.637459   40254 main.go:141] libmachine: (ha-322980-m05)     </disk>
	I0505 21:39:00.637468   40254 main.go:141] libmachine: (ha-322980-m05)     <interface type='network'>
	I0505 21:39:00.637473   40254 main.go:141] libmachine: (ha-322980-m05)       <source network='mk-ha-322980'/>
	I0505 21:39:00.637481   40254 main.go:141] libmachine: (ha-322980-m05)       <model type='virtio'/>
	I0505 21:39:00.637485   40254 main.go:141] libmachine: (ha-322980-m05)     </interface>
	I0505 21:39:00.637493   40254 main.go:141] libmachine: (ha-322980-m05)     <interface type='network'>
	I0505 21:39:00.637498   40254 main.go:141] libmachine: (ha-322980-m05)       <source network='default'/>
	I0505 21:39:00.637505   40254 main.go:141] libmachine: (ha-322980-m05)       <model type='virtio'/>
	I0505 21:39:00.637515   40254 main.go:141] libmachine: (ha-322980-m05)     </interface>
	I0505 21:39:00.637523   40254 main.go:141] libmachine: (ha-322980-m05)     <serial type='pty'>
	I0505 21:39:00.637528   40254 main.go:141] libmachine: (ha-322980-m05)       <target port='0'/>
	I0505 21:39:00.637535   40254 main.go:141] libmachine: (ha-322980-m05)     </serial>
	I0505 21:39:00.637540   40254 main.go:141] libmachine: (ha-322980-m05)     <console type='pty'>
	I0505 21:39:00.637579   40254 main.go:141] libmachine: (ha-322980-m05)       <target type='serial' port='0'/>
	I0505 21:39:00.637596   40254 main.go:141] libmachine: (ha-322980-m05)     </console>
	I0505 21:39:00.637610   40254 main.go:141] libmachine: (ha-322980-m05)     <rng model='virtio'>
	I0505 21:39:00.637623   40254 main.go:141] libmachine: (ha-322980-m05)       <backend model='random'>/dev/random</backend>
	I0505 21:39:00.637638   40254 main.go:141] libmachine: (ha-322980-m05)     </rng>
	I0505 21:39:00.637653   40254 main.go:141] libmachine: (ha-322980-m05)     
	I0505 21:39:00.637664   40254 main.go:141] libmachine: (ha-322980-m05)     
	I0505 21:39:00.637672   40254 main.go:141] libmachine: (ha-322980-m05)   </devices>
	I0505 21:39:00.637685   40254 main.go:141] libmachine: (ha-322980-m05) </domain>
	I0505 21:39:00.637706   40254 main.go:141] libmachine: (ha-322980-m05) 
	I0505 21:39:00.644422   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:53:e5:5c in network default
	I0505 21:39:00.645035   40254 main.go:141] libmachine: (ha-322980-m05) Ensuring networks are active...
	I0505 21:39:00.645055   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:00.645903   40254 main.go:141] libmachine: (ha-322980-m05) Ensuring network default is active
	I0505 21:39:00.646213   40254 main.go:141] libmachine: (ha-322980-m05) Ensuring network mk-ha-322980 is active
	I0505 21:39:00.646633   40254 main.go:141] libmachine: (ha-322980-m05) Getting domain xml...
	I0505 21:39:00.647463   40254 main.go:141] libmachine: (ha-322980-m05) Creating domain...
	I0505 21:39:01.974100   40254 main.go:141] libmachine: (ha-322980-m05) Waiting to get IP...
	I0505 21:39:01.974883   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:01.975404   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:01.975430   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:01.975383   40291 retry.go:31] will retry after 303.53142ms: waiting for machine to come up
	I0505 21:39:02.280937   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:02.281562   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:02.281590   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:02.281511   40291 retry.go:31] will retry after 301.390499ms: waiting for machine to come up
	I0505 21:39:02.584989   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:02.585431   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:02.585459   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:02.585380   40291 retry.go:31] will retry after 463.22796ms: waiting for machine to come up
	I0505 21:39:03.049781   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:03.050177   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:03.050202   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:03.050147   40291 retry.go:31] will retry after 493.503622ms: waiting for machine to come up
	I0505 21:39:03.545767   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:03.546261   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:03.546291   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:03.546205   40291 retry.go:31] will retry after 460.354187ms: waiting for machine to come up
	I0505 21:39:04.009259   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:04.009725   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:04.009757   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:04.009671   40291 retry.go:31] will retry after 940.299192ms: waiting for machine to come up
	I0505 21:39:04.952135   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:04.952679   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:04.952711   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:04.952628   40291 retry.go:31] will retry after 883.967422ms: waiting for machine to come up
	I0505 21:39:05.838325   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:05.838748   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:05.838781   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:05.838703   40291 retry.go:31] will retry after 1.262385006s: waiting for machine to come up
	I0505 21:39:07.102687   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:07.103154   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:07.103187   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:07.103104   40291 retry.go:31] will retry after 1.250263095s: waiting for machine to come up
	I0505 21:39:08.355399   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:08.355888   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:08.355916   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:08.355833   40291 retry.go:31] will retry after 1.662218531s: waiting for machine to come up
	I0505 21:39:10.020680   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:10.021227   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:10.021258   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:10.021148   40291 retry.go:31] will retry after 1.803508582s: waiting for machine to come up
	I0505 21:39:11.826804   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:11.827346   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:11.827388   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:11.827302   40291 retry.go:31] will retry after 2.34436299s: waiting for machine to come up
	I0505 21:39:14.173595   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:14.174113   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:14.174143   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:14.174063   40291 retry.go:31] will retry after 2.763628619s: waiting for machine to come up
	I0505 21:39:16.939842   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:16.940365   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:16.940393   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:16.940322   40291 retry.go:31] will retry after 4.462876092s: waiting for machine to come up
	I0505 21:39:21.404487   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:21.404960   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find current IP address of domain ha-322980-m05 in network mk-ha-322980
	I0505 21:39:21.404993   40254 main.go:141] libmachine: (ha-322980-m05) DBG | I0505 21:39:21.404914   40291 retry.go:31] will retry after 6.59327036s: waiting for machine to come up
	I0505 21:39:28.001843   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.002274   40254 main.go:141] libmachine: (ha-322980-m05) Found IP for machine: 192.168.39.30
	I0505 21:39:28.002299   40254 main.go:141] libmachine: (ha-322980-m05) Reserving static IP address...
	I0505 21:39:28.002313   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has current primary IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.002681   40254 main.go:141] libmachine: (ha-322980-m05) DBG | unable to find host DHCP lease matching {name: "ha-322980-m05", mac: "52:54:00:a2:73:06", ip: "192.168.39.30"} in network mk-ha-322980
	I0505 21:39:28.077447   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Getting to WaitForSSH function...
	I0505 21:39:28.077477   40254 main.go:141] libmachine: (ha-322980-m05) Reserved static IP address: 192.168.39.30
	I0505 21:39:28.077508   40254 main.go:141] libmachine: (ha-322980-m05) Waiting for SSH to be available...
	I0505 21:39:28.079957   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.080598   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:28.080630   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.080742   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Using SSH client type: external
	I0505 21:39:28.080771   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/id_rsa (-rw-------)
	I0505 21:39:28.080801   40254 main.go:141] libmachine: (ha-322980-m05) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 21:39:28.080814   40254 main.go:141] libmachine: (ha-322980-m05) DBG | About to run SSH command:
	I0505 21:39:28.080828   40254 main.go:141] libmachine: (ha-322980-m05) DBG | exit 0
	I0505 21:39:28.208430   40254 main.go:141] libmachine: (ha-322980-m05) DBG | SSH cmd err, output: <nil>: 
	I0505 21:39:28.208708   40254 main.go:141] libmachine: (ha-322980-m05) KVM machine creation complete!
	I0505 21:39:28.209045   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetConfigRaw
	I0505 21:39:28.209632   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:28.209860   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:28.210067   40254 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 21:39:28.210084   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetState
	I0505 21:39:28.211475   40254 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 21:39:28.211518   40254 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 21:39:28.211527   40254 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 21:39:28.211535   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:28.214656   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.215183   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:28.215213   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.215329   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:28.215538   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.215732   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.215886   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:28.216067   40254 main.go:141] libmachine: Using SSH client type: native
	I0505 21:39:28.216359   40254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:39:28.216376   40254 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 21:39:28.323520   40254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:39:28.323550   40254 main.go:141] libmachine: Detecting the provisioner...
	I0505 21:39:28.323561   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:28.326429   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.326961   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:28.326995   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.327205   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:28.327425   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.327658   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.327810   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:28.327973   40254 main.go:141] libmachine: Using SSH client type: native
	I0505 21:39:28.328157   40254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:39:28.328168   40254 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 21:39:28.441265   40254 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 21:39:28.441352   40254 main.go:141] libmachine: found compatible host: buildroot
	I0505 21:39:28.441362   40254 main.go:141] libmachine: Provisioning with buildroot...
	I0505 21:39:28.441369   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetMachineName
	I0505 21:39:28.441630   40254 buildroot.go:166] provisioning hostname "ha-322980-m05"
	I0505 21:39:28.441655   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetMachineName
	I0505 21:39:28.441870   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:28.444473   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.444892   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:28.444944   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.445013   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:28.445198   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.445400   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.445560   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:28.445752   40254 main.go:141] libmachine: Using SSH client type: native
	I0505 21:39:28.445967   40254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:39:28.445984   40254 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980-m05 && echo "ha-322980-m05" | sudo tee /etc/hostname
	I0505 21:39:28.573551   40254 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980-m05
	
	I0505 21:39:28.573603   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:28.576547   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.576991   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:28.577023   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.577189   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:28.577371   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.577551   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:28.577693   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:28.577890   40254 main.go:141] libmachine: Using SSH client type: native
	I0505 21:39:28.578039   40254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:39:28.578061   40254 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980-m05' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980-m05/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980-m05' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:39:28.693701   40254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:39:28.693777   40254 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:39:28.693818   40254 buildroot.go:174] setting up certificates
	I0505 21:39:28.693827   40254 provision.go:84] configureAuth start
	I0505 21:39:28.693839   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetMachineName
	I0505 21:39:28.694141   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetIP
	I0505 21:39:28.696626   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.697054   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:28.697074   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.697236   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:28.699547   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.699978   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:28.700005   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:28.700134   40254 provision.go:143] copyHostCerts
	I0505 21:39:28.700157   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:39:28.700200   40254 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:39:28.700208   40254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:39:28.700268   40254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:39:28.700357   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:39:28.700374   40254 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:39:28.700380   40254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:39:28.700408   40254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:39:28.700463   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:39:28.700480   40254 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:39:28.700487   40254 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:39:28.700507   40254 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:39:28.700568   40254 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980-m05 san=[127.0.0.1 192.168.39.30 ha-322980-m05 localhost minikube]
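	(For context: the server certificate generated above can be inspected with openssl to confirm the SANs listed in the log entry; the path below is the one the log reports, and the exact output layout depends on the local openssl version.)
	# print the subject alternative names baked into the freshly generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'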
	I0505 21:39:29.029602   40254 provision.go:177] copyRemoteCerts
	I0505 21:39:29.029654   40254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:39:29.029677   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:29.032398   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.032725   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.032780   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.032981   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:29.033150   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.033320   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:29.033516   40254 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/id_rsa Username:docker}
	I0505 21:39:29.120283   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:39:29.120352   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:39:29.149042   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:39:29.149151   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 21:39:29.179873   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:39:29.179935   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 21:39:29.208984   40254 provision.go:87] duration metric: took 515.147101ms to configureAuth
	I0505 21:39:29.209008   40254 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:39:29.209204   40254 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:39:29.209277   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:29.212082   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.212572   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.212602   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.212792   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:29.213037   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.213244   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.213430   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:29.213600   40254 main.go:141] libmachine: Using SSH client type: native
	I0505 21:39:29.213812   40254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:39:29.213840   40254 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:39:29.496030   40254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:39:29.496091   40254 main.go:141] libmachine: Checking connection to Docker...
	I0505 21:39:29.496103   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetURL
	I0505 21:39:29.497410   40254 main.go:141] libmachine: (ha-322980-m05) DBG | Using libvirt version 6000000
	I0505 21:39:29.499675   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.500102   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.500131   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.500317   40254 main.go:141] libmachine: Docker is up and running!
	I0505 21:39:29.500330   40254 main.go:141] libmachine: Reticulating splines...
	I0505 21:39:29.500336   40254 client.go:171] duration metric: took 29.212758128s to LocalClient.Create
	I0505 21:39:29.500355   40254 start.go:167] duration metric: took 29.212823799s to libmachine.API.Create "ha-322980"
	I0505 21:39:29.500364   40254 start.go:293] postStartSetup for "ha-322980-m05" (driver="kvm2")
	I0505 21:39:29.500378   40254 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:39:29.500401   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:29.500640   40254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:39:29.500671   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:29.502782   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.503191   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.503219   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.503334   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:29.503553   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.503735   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:29.503888   40254 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/id_rsa Username:docker}
	I0505 21:39:29.588459   40254 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:39:29.593335   40254 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:39:29.593358   40254 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:39:29.593424   40254 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:39:29.593539   40254 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:39:29.593554   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:39:29.593632   40254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:39:29.604226   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:39:29.631502   40254 start.go:296] duration metric: took 131.101667ms for postStartSetup
	I0505 21:39:29.631553   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetConfigRaw
	I0505 21:39:29.632325   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetIP
	I0505 21:39:29.635051   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.635569   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.635602   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.635906   40254 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:39:29.636095   40254 start.go:128] duration metric: took 29.36898698s to createHost
	I0505 21:39:29.636117   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:29.638451   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.638887   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.638913   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.639059   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:29.639254   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.639470   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.639618   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:29.639789   40254 main.go:141] libmachine: Using SSH client type: native
	I0505 21:39:29.639993   40254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:39:29.640010   40254 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 21:39:29.744868   40254 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714945169.736257026
	
	I0505 21:39:29.744894   40254 fix.go:216] guest clock: 1714945169.736257026
	I0505 21:39:29.744903   40254 fix.go:229] Guest: 2024-05-05 21:39:29.736257026 +0000 UTC Remote: 2024-05-05 21:39:29.636108072 +0000 UTC m=+29.613249397 (delta=100.148954ms)
	I0505 21:39:29.744940   40254 fix.go:200] guest clock delta is within tolerance: 100.148954ms
	I0505 21:39:29.744947   40254 start.go:83] releasing machines lock for "ha-322980-m05", held for 29.477992828s
	I0505 21:39:29.744965   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:29.745229   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetIP
	I0505 21:39:29.747875   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.748270   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.748302   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.748453   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:29.748995   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:29.749181   40254 main.go:141] libmachine: (ha-322980-m05) Calling .DriverName
	I0505 21:39:29.749274   40254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:39:29.749308   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:29.749426   40254 ssh_runner.go:195] Run: systemctl --version
	I0505 21:39:29.749445   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHHostname
	I0505 21:39:29.751781   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.751969   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.752162   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.752200   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.752275   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:29.752447   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:29.752469   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:29.752476   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.752639   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHPort
	I0505 21:39:29.752651   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:29.752831   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHKeyPath
	I0505 21:39:29.752826   40254 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/id_rsa Username:docker}
	I0505 21:39:29.752993   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetSSHUsername
	I0505 21:39:29.753162   40254 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980-m05/id_rsa Username:docker}
	I0505 21:39:29.840481   40254 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:39:30.012472   40254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:39:30.020275   40254 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:39:30.020345   40254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:39:30.038659   40254 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 21:39:30.038686   40254 start.go:494] detecting cgroup driver to use...
	I0505 21:39:30.038759   40254 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:39:30.059209   40254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:39:30.075536   40254 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:39:30.075611   40254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:39:30.091565   40254 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:39:30.109535   40254 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:39:30.255305   40254 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:39:30.430272   40254 docker.go:233] disabling docker service ...
	I0505 21:39:30.430363   40254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:39:30.448666   40254 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:39:30.463159   40254 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:39:30.616329   40254 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:39:30.761566   40254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:39:30.778880   40254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:39:30.800455   40254 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:39:30.800528   40254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:39:30.812327   40254 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:39:30.812422   40254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:39:30.824216   40254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:39:30.836112   40254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:39:30.847723   40254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:39:30.860129   40254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:39:30.872201   40254 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:39:30.893360   40254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
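	(The sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf; judging only from the replacement strings in those commands, the relevant keys should afterwards read roughly as shown below — the file on the node may contain additional settings.)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",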
	I0505 21:39:30.907182   40254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:39:30.918791   40254 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 21:39:30.918859   40254 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 21:39:30.935301   40254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
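	(The two commands above load br_netfilter and enable IPv4 forwarding because the earlier sysctl probe failed; whether they took effect can be confirmed on the node as sketched below — exact sysctl output varies by kernel.)
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should report 1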
	I0505 21:39:30.946530   40254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:39:31.093956   40254 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:39:31.270780   40254 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:39:31.270891   40254 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:39:31.277393   40254 start.go:562] Will wait 60s for crictl version
	I0505 21:39:31.277465   40254 ssh_runner.go:195] Run: which crictl
	I0505 21:39:31.281865   40254 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:39:31.323179   40254 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:39:31.323245   40254 ssh_runner.go:195] Run: crio --version
	I0505 21:39:31.363203   40254 ssh_runner.go:195] Run: crio --version
	I0505 21:39:31.397393   40254 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:39:31.398776   40254 main.go:141] libmachine: (ha-322980-m05) Calling .GetIP
	I0505 21:39:31.401414   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:31.401882   40254 main.go:141] libmachine: (ha-322980-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:73:06", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:39:16 +0000 UTC Type:0 Mac:52:54:00:a2:73:06 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:ha-322980-m05 Clientid:01:52:54:00:a2:73:06}
	I0505 21:39:31.401916   40254 main.go:141] libmachine: (ha-322980-m05) DBG | domain ha-322980-m05 has defined IP address 192.168.39.30 and MAC address 52:54:00:a2:73:06 in network mk-ha-322980
	I0505 21:39:31.402190   40254 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:39:31.406985   40254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:39:31.421058   40254 mustload.go:65] Loading cluster: ha-322980
	I0505 21:39:31.421325   40254 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:39:31.421560   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:31.421592   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:31.437353   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0505 21:39:31.437775   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:31.438251   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:31.438274   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:31.438582   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:31.438742   40254 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:39:31.440252   40254 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:39:31.440522   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:31.440562   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:31.455691   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0505 21:39:31.456039   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:31.456535   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:31.456558   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:31.456839   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:31.457016   40254 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:39:31.457206   40254 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.30
	I0505 21:39:31.457217   40254 certs.go:194] generating shared ca certs ...
	I0505 21:39:31.457235   40254 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:39:31.457375   40254 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:39:31.457439   40254 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:39:31.457450   40254 certs.go:256] generating profile certs ...
	I0505 21:39:31.457525   40254 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:39:31.457548   40254 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.43fd18e2
	I0505 21:39:31.457568   40254 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.43fd18e2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.30 192.168.39.254]
	I0505 21:39:31.584740   40254 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.43fd18e2 ...
	I0505 21:39:31.584771   40254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.43fd18e2: {Name:mkcb6ec29d3ee12ca4772a5ffa3fe4454906821e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:39:31.584937   40254 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.43fd18e2 ...
	I0505 21:39:31.584950   40254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.43fd18e2: {Name:mkfa3ea229f8c38095e535ecfbdfeab6bd095143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:39:31.585016   40254 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.43fd18e2 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:39:31.585159   40254 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.43fd18e2 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:39:31.585281   40254 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:39:31.585295   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:39:31.585312   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:39:31.585325   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:39:31.585337   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:39:31.585351   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:39:31.585370   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:39:31.585382   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:39:31.585394   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:39:31.585440   40254 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:39:31.585466   40254 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:39:31.585476   40254 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:39:31.585503   40254 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:39:31.585527   40254 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:39:31.585546   40254 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:39:31.585581   40254 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:39:31.585608   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:39:31.585621   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:39:31.585631   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:39:31.585665   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:39:31.588760   40254 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:39:31.589208   40254 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:39:31.589240   40254 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:39:31.589451   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:39:31.589618   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:39:31.589723   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:39:31.589857   40254 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:39:31.667896   40254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0505 21:39:31.674980   40254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 21:39:31.688776   40254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0505 21:39:31.693589   40254 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0505 21:39:31.704887   40254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 21:39:31.709915   40254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 21:39:31.721139   40254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0505 21:39:31.725737   40254 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 21:39:31.737636   40254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0505 21:39:31.743963   40254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 21:39:31.758348   40254 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0505 21:39:31.763705   40254 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 21:39:31.777477   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:39:31.807641   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:39:31.837319   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:39:31.865327   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:39:31.892953   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0505 21:39:31.921805   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:39:31.948008   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:39:31.975763   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:39:32.006010   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:39:32.033840   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:39:32.061559   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:39:32.088438   40254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 21:39:32.109406   40254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0505 21:39:32.128923   40254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 21:39:32.147996   40254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 21:39:32.168152   40254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 21:39:32.188768   40254 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 21:39:32.208591   40254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 21:39:32.229067   40254 ssh_runner.go:195] Run: openssl version
	I0505 21:39:32.235770   40254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:39:32.248057   40254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:39:32.253627   40254 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:39:32.253714   40254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:39:32.260924   40254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:39:32.274682   40254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:39:32.287810   40254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:39:32.293229   40254 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:39:32.293283   40254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:39:32.299847   40254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:39:32.314996   40254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:39:32.328627   40254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:39:32.334275   40254 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:39:32.334333   40254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:39:32.341391   40254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
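	(The ln -fs commands above follow the standard OpenSSL CA-directory convention: each certificate in /etc/ssl/certs is also reachable through a symlink named after its subject hash, which is what the openssl x509 -hash calls in this sequence compute — b5213941 for minikubeCA.pem, 3ec20f2e and 51391683 for the two test certs. For example:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem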
	I0505 21:39:32.356845   40254 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:39:32.362103   40254 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 21:39:32.362171   40254 kubeadm.go:928] updating node {m05 192.168.39.30 8443 v1.30.0  true true} ...
	I0505 21:39:32.362284   40254 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980-m05 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
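	(The unit snippet above is written as a systemd drop-in: the empty ExecStart= line clears the ExecStart inherited from the base kubelet.service before the next line installs the node-specific command with the hostname override, node IP and kubeconfig paths. On the node, the merged unit can be viewed with:)
	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in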
	I0505 21:39:32.362311   40254 kube-vip.go:111] generating kube-vip config ...
	I0505 21:39:32.362343   40254 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:39:32.381856   40254 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:39:32.381967   40254 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
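	(The manifest above runs kube-vip as a static pod with control-plane load-balancing enabled; the VIP 192.168.39.254 is held by whichever control-plane node currently owns the plndr-cp-lock lease in kube-system, with the 5s/3s/1s lease, renew and retry settings from the env vars. Once the cluster is reachable, the current holder can be checked with:)
	kubectl -n kube-system get lease plndr-cp-lock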
	I0505 21:39:32.382035   40254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:39:32.392786   40254 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0505 21:39:32.392946   40254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0505 21:39:32.404154   40254 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0505 21:39:32.404192   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:39:32.404201   40254 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0505 21:39:32.404218   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:39:32.404259   40254 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 21:39:32.404154   40254 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0505 21:39:32.404282   40254 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 21:39:32.404305   40254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:39:32.422937   40254 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0505 21:39:32.422997   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0505 21:39:32.422960   40254 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:39:32.423139   40254 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 21:39:32.422948   40254 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0505 21:39:32.423192   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0505 21:39:32.455777   40254 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0505 21:39:32.455827   40254 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
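	(The kubectl, kubeadm and kubelet binaries are copied from the local cache rather than downloaded on the node; the checksum URLs in the log show where the cache was validated from. A manual spot-check against the published digest would look like this, using the standard dl.k8s.io layout where the .sha256 file contains only the hex digest:)
	curl -fsSLO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check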
	I0505 21:39:33.398653   40254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 21:39:33.409301   40254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 21:39:33.428675   40254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:39:33.450340   40254 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:39:33.470859   40254 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:39:33.475936   40254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 21:39:33.491327   40254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:39:33.632859   40254 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:39:33.653485   40254 host.go:66] Checking if "ha-322980" exists ...
	I0505 21:39:33.653901   40254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:39:33.653950   40254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:39:33.668779   40254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0505 21:39:33.669168   40254 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:39:33.669614   40254 main.go:141] libmachine: Using API Version  1
	I0505 21:39:33.669639   40254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:39:33.669942   40254 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:39:33.670124   40254 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:39:33.670321   40254 start.go:316] joinCluster: &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:39:33.670529   40254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0505 21:39:33.670548   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:39:33.673961   40254 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:39:33.674437   40254 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:39:33.674468   40254 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:39:33.674653   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:39:33.674846   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:39:33.675004   40254 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:39:33.675194   40254 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:39:33.849687   40254 start.go:342] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}
	I0505 21:39:33.849755   40254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443"
	I0505 21:41:54.731668   40254 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": (2m20.881881519s)
	E0505 21:41:54.731761   40254 start.go:344] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-322980-m05 localhost] and IPs [192.168.39.30 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-322980-m05 localhost] and IPs [192.168.39.30 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.168.39.29:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0505 21:41:54.731791   40254 start.go:347] resetting control-plane node "m05" before attempting to rejoin cluster...
	I0505 21:41:54.731810   40254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force"
	I0505 21:41:54.881851   40254 start.go:351] successfully reset control-plane node "m05"
	I0505 21:41:54.881910   40254 retry.go:31] will retry after 13.207088772s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-322980-m05 localhost] and IPs [192.168.39.30 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-322980-m05 localhost] and IPs [192.168.39.30 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.168.39.29:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0505 21:42:08.090129   40254 start.go:342] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}
	I0505 21:42:08.090227   40254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443"
	I0505 21:44:11.425789   40254 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": (2m3.335491113s)
	E0505 21:44:11.425879   40254 start.go:344] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using the existing "etcd/peer" certificate and key
	[certs] Using the existing "apiserver-etcd-client" certificate and key
	[certs] Using the existing "etcd/server" certificate and key
	[certs] Using the existing "etcd/healthcheck-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Using the existing "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "front-proxy-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.168.39.29:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0505 21:44:11.425898   40254 start.go:347] resetting control-plane node "m05" before attempting to rejoin cluster...
	I0505 21:44:11.425911   40254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force"
	I0505 21:44:11.576378   40254 start.go:351] successfully reset control-plane node "m05"
	I0505 21:44:11.576438   40254 start.go:318] duration metric: took 4m37.906119209s to joinCluster
	I0505 21:44:11.579222   40254 out.go:177] 
	W0505 21:44:11.580682   40254 out.go:239] X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error joining control-plane node "m05" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ewu8l.vy6i7zygjz2zt0ve --discovery-token-ca-cert-hash sha256:6a26f18bad1f4f0bdeab00e50a185d34e4c63698b4d623f2ccf5d34207e02541 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-322980-m05 --control-plane --apiserver-advertise-address=192.168.39.30 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using the existing "etcd/peer" certificate and key
	[certs] Using the existing "apiserver-etcd-client" certificate and key
	[certs] Using the existing "etcd/server" certificate and key
	[certs] Using the existing "etcd/healthcheck-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Using the existing "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "front-proxy-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.168.39.29:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	W0505 21:44:11.580706   40254 out.go:239] * 
	W0505 21:44:11.582834   40254 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 21:44:11.584601   40254 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 node add -p ha-322980 --control-plane -v=7 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-322980 -n ha-322980
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-322980 logs -n 25: (1.965047776s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m04 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp testdata/cp-test.txt                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980:/home/docker/cp-test_ha-322980-m04_ha-322980.txt                       |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980 sudo cat                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980.txt                                 |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m02:/home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m02 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m03:/home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | ha-322980-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-322980 ssh -n ha-322980-m03 sudo cat                                          | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC | 05 May 24 21:21 UTC |
	|         | /home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-322980 node stop m02 -v=7                                                     | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-322980 node start m02 -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:23 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980 -v=7                                                           | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-322980 -v=7                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-322980 --wait=true -v=7                                                    | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-322980                                                                | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC |                     |
	| node    | ha-322980 node delete m03 -v=7                                                   | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC | 05 May 24 21:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-322980 stop -v=7                                                              | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-322980 --wait=true                                                         | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:33 UTC | 05 May 24 21:38 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	| node    | add -p ha-322980                                                                 | ha-322980 | jenkins | v1.33.0 | 05 May 24 21:39 UTC |                     |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:33:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:33:10.782529   38613 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:33:10.782760   38613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:33:10.782768   38613 out.go:304] Setting ErrFile to fd 2...
	I0505 21:33:10.782773   38613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:33:10.782956   38613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:33:10.783453   38613 out.go:298] Setting JSON to false
	I0505 21:33:10.784332   38613 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4538,"bootTime":1714940253,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:33:10.784386   38613 start.go:139] virtualization: kvm guest
	I0505 21:33:10.786936   38613 out.go:177] * [ha-322980] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:33:10.788876   38613 notify.go:220] Checking for updates...
	I0505 21:33:10.788888   38613 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:33:10.791377   38613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:33:10.793001   38613 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:33:10.794561   38613 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:33:10.795915   38613 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:33:10.797150   38613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:33:10.798973   38613 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:33:10.799587   38613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:33:10.799647   38613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:33:10.814241   38613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0505 21:33:10.814665   38613 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:33:10.815181   38613 main.go:141] libmachine: Using API Version  1
	I0505 21:33:10.815208   38613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:33:10.815590   38613 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:33:10.815794   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:33:10.816055   38613 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:33:10.816382   38613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:33:10.816426   38613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:33:10.830082   38613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0505 21:33:10.830482   38613 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:33:10.830883   38613 main.go:141] libmachine: Using API Version  1
	I0505 21:33:10.830898   38613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:33:10.831163   38613 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:33:10.831324   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:33:10.864696   38613 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:33:10.865966   38613 start.go:297] selected driver: kvm2
	I0505 21:33:10.866000   38613 start.go:901] validating driver "kvm2" against &{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bina
ryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:33:10.866177   38613 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:33:10.866507   38613 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:33:10.866610   38613 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:33:10.880610   38613 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:33:10.881289   38613 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:33:10.881361   38613 cni.go:84] Creating CNI manager for ""
	I0505 21:33:10.881375   38613 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 21:33:10.881440   38613 start.go:340] cluster config:
	{Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:33:10.881580   38613 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:33:10.883606   38613 out.go:177] * Starting "ha-322980" primary control-plane node in "ha-322980" cluster
	I0505 21:33:10.885094   38613 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:33:10.885132   38613 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:33:10.885142   38613 cache.go:56] Caching tarball of preloaded images
	I0505 21:33:10.885235   38613 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:33:10.885248   38613 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:33:10.885436   38613 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/config.json ...
	I0505 21:33:10.885835   38613 start.go:360] acquireMachinesLock for ha-322980: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:33:10.885894   38613 start.go:364] duration metric: took 32.037µs to acquireMachinesLock for "ha-322980"
	I0505 21:33:10.885940   38613 start.go:96] Skipping create...Using existing machine configuration
	I0505 21:33:10.885948   38613 fix.go:54] fixHost starting: 
	I0505 21:33:10.886344   38613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:33:10.886392   38613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:33:10.899924   38613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0505 21:33:10.900292   38613 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:33:10.900720   38613 main.go:141] libmachine: Using API Version  1
	I0505 21:33:10.900739   38613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:33:10.901034   38613 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:33:10.901262   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:33:10.901419   38613 main.go:141] libmachine: (ha-322980) Calling .GetState
	I0505 21:33:10.903122   38613 fix.go:112] recreateIfNeeded on ha-322980: state=Running err=<nil>
	W0505 21:33:10.903139   38613 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 21:33:10.906147   38613 out.go:177] * Updating the running kvm2 "ha-322980" VM ...
	I0505 21:33:10.907677   38613 machine.go:94] provisionDockerMachine start ...
	I0505 21:33:10.907696   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:33:10.907909   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:33:10.910386   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:10.910788   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:33:10.910828   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:10.910915   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:33:10.911055   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:10.911185   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:10.911292   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:33:10.911412   38613 main.go:141] libmachine: Using SSH client type: native
	I0505 21:33:10.911644   38613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:33:10.911658   38613 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 21:33:11.024818   38613 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:33:11.024856   38613 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:33:11.025108   38613 buildroot.go:166] provisioning hostname "ha-322980"
	I0505 21:33:11.025131   38613 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:33:11.025347   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:33:11.028371   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.028884   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:33:11.028926   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.029087   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:33:11.029295   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:11.029473   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:11.029675   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:33:11.029887   38613 main.go:141] libmachine: Using SSH client type: native
	I0505 21:33:11.030056   38613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:33:11.030069   38613 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-322980 && echo "ha-322980" | sudo tee /etc/hostname
	I0505 21:33:11.157460   38613 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-322980
	
	I0505 21:33:11.157489   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:33:11.160261   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.160672   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:33:11.160705   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.160916   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:33:11.161096   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:11.161282   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:11.161423   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:33:11.161692   38613 main.go:141] libmachine: Using SSH client type: native
	I0505 21:33:11.161871   38613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:33:11.161887   38613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-322980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-322980/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-322980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:33:11.273101   38613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:33:11.273127   38613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:33:11.273149   38613 buildroot.go:174] setting up certificates
	I0505 21:33:11.273158   38613 provision.go:84] configureAuth start
	I0505 21:33:11.273165   38613 main.go:141] libmachine: (ha-322980) Calling .GetMachineName
	I0505 21:33:11.273418   38613 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:33:11.275717   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.276046   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:33:11.276079   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.276175   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:33:11.278338   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.278689   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:33:11.278718   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.278852   38613 provision.go:143] copyHostCerts
	I0505 21:33:11.278886   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:33:11.278919   38613 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:33:11.278934   38613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:33:11.278996   38613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:33:11.279068   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:33:11.279086   38613 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:33:11.279093   38613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:33:11.279115   38613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:33:11.279193   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:33:11.279216   38613 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:33:11.279223   38613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:33:11.279251   38613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:33:11.279300   38613 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.ha-322980 san=[127.0.0.1 192.168.39.178 ha-322980 localhost minikube]
	I0505 21:33:11.367639   38613 provision.go:177] copyRemoteCerts
	I0505 21:33:11.367696   38613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:33:11.367717   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:33:11.370327   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.370734   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:33:11.370764   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.370920   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:33:11.371098   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:11.371248   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:33:11.371380   38613 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:33:11.456075   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:33:11.456139   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0505 21:33:11.485602   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:33:11.485663   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 21:33:11.514274   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:33:11.514333   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:33:11.542683   38613 provision.go:87] duration metric: took 269.514163ms to configureAuth
	I0505 21:33:11.542728   38613 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:33:11.542972   38613 config.go:182] Loaded profile config "ha-322980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:33:11.543046   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:33:11.545534   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.545869   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:33:11.545904   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:33:11.546080   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:33:11.546288   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:11.546455   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:33:11.546586   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:33:11.546748   38613 main.go:141] libmachine: Using SSH client type: native
	I0505 21:33:11.546904   38613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:33:11.546919   38613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:34:46.337001   38613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:34:46.337030   38613 machine.go:97] duration metric: took 1m35.429340867s to provisionDockerMachine
	I0505 21:34:46.337044   38613 start.go:293] postStartSetup for "ha-322980" (driver="kvm2")
	I0505 21:34:46.337058   38613 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:34:46.337077   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:34:46.337379   38613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:34:46.337405   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:34:46.340780   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.341321   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:34:46.341347   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.341537   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:34:46.341725   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:34:46.341925   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:34:46.342071   38613 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:34:46.428444   38613 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:34:46.434008   38613 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:34:46.434035   38613 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:34:46.434115   38613 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:34:46.434192   38613 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:34:46.434204   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:34:46.434324   38613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:34:46.446123   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:34:46.477537   38613 start.go:296] duration metric: took 140.476449ms for postStartSetup
	I0505 21:34:46.477587   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:34:46.477906   38613 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 21:34:46.477933   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:34:46.480558   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.481112   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:34:46.481149   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.481324   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:34:46.481545   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:34:46.481749   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:34:46.481878   38613 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	W0505 21:34:46.567686   38613 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0505 21:34:46.567714   38613 fix.go:56] duration metric: took 1m35.681766071s for fixHost
	I0505 21:34:46.567736   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:34:46.570884   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.571348   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:34:46.571372   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.571704   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:34:46.571919   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:34:46.572078   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:34:46.572197   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:34:46.572349   38613 main.go:141] libmachine: Using SSH client type: native
	I0505 21:34:46.572513   38613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0505 21:34:46.572527   38613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 21:34:46.685387   38613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944886.653039262
	
	I0505 21:34:46.685408   38613 fix.go:216] guest clock: 1714944886.653039262
	I0505 21:34:46.685417   38613 fix.go:229] Guest: 2024-05-05 21:34:46.653039262 +0000 UTC Remote: 2024-05-05 21:34:46.567721145 +0000 UTC m=+95.833154387 (delta=85.318117ms)
	I0505 21:34:46.685462   38613 fix.go:200] guest clock delta is within tolerance: 85.318117ms
	I0505 21:34:46.685470   38613 start.go:83] releasing machines lock for "ha-322980", held for 1m35.799534181s
	I0505 21:34:46.685504   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:34:46.685779   38613 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:34:46.688376   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.688754   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:34:46.688774   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.688943   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:34:46.689405   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:34:46.689615   38613 main.go:141] libmachine: (ha-322980) Calling .DriverName
	I0505 21:34:46.689730   38613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:34:46.689772   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:34:46.689833   38613 ssh_runner.go:195] Run: cat /version.json
	I0505 21:34:46.689854   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHHostname
	I0505 21:34:46.692098   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.692419   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.692458   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:34:46.692493   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.692612   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:34:46.692769   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:34:46.692857   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:34:46.692897   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:34:46.692924   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:34:46.693019   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHPort
	I0505 21:34:46.693086   38613 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:34:46.693152   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHKeyPath
	I0505 21:34:46.693285   38613 main.go:141] libmachine: (ha-322980) Calling .GetSSHUsername
	I0505 21:34:46.693413   38613 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/ha-322980/id_rsa Username:docker}
	I0505 21:34:46.793198   38613 ssh_runner.go:195] Run: systemctl --version
	I0505 21:34:46.799918   38613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:34:46.967784   38613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 21:34:46.976742   38613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:34:46.976805   38613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:34:46.988080   38613 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 21:34:46.988099   38613 start.go:494] detecting cgroup driver to use...
	I0505 21:34:46.988150   38613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:34:47.010443   38613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:34:47.026066   38613 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:34:47.026114   38613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:34:47.042613   38613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:34:47.057995   38613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:34:47.228241   38613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:34:47.435324   38613 docker.go:233] disabling docker service ...
	I0505 21:34:47.435388   38613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:34:47.480905   38613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:34:47.503453   38613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:34:47.709390   38613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:34:47.877540   38613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:34:47.892588   38613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:34:47.914880   38613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:34:47.914932   38613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:34:47.926389   38613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:34:47.926447   38613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:34:47.938068   38613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:34:47.950016   38613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:34:47.961805   38613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:34:47.973537   38613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:34:47.984660   38613 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:34:47.997519   38613 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:34:48.008809   38613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:34:48.018939   38613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:34:48.029137   38613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:34:48.178867   38613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:35:01.861041   38613 ssh_runner.go:235] Completed: sudo systemctl restart crio: (13.682138949s)
	I0505 21:35:01.861075   38613 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:35:01.861144   38613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:35:01.867892   38613 start.go:562] Will wait 60s for crictl version
	I0505 21:35:01.867936   38613 ssh_runner.go:195] Run: which crictl
	I0505 21:35:01.872645   38613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:35:01.916160   38613 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:35:01.916237   38613 ssh_runner.go:195] Run: crio --version
	I0505 21:35:01.950017   38613 ssh_runner.go:195] Run: crio --version
	I0505 21:35:01.984726   38613 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:35:01.986241   38613 main.go:141] libmachine: (ha-322980) Calling .GetIP
	I0505 21:35:01.988743   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:35:01.989102   38613 main.go:141] libmachine: (ha-322980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:13:35", ip: ""} in network mk-ha-322980: {Iface:virbr1 ExpiryTime:2024-05-05 22:15:44 +0000 UTC Type:0 Mac:52:54:00:b4:13:35 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-322980 Clientid:01:52:54:00:b4:13:35}
	I0505 21:35:01.989121   38613 main.go:141] libmachine: (ha-322980) DBG | domain ha-322980 has defined IP address 192.168.39.178 and MAC address 52:54:00:b4:13:35 in network mk-ha-322980
	I0505 21:35:01.989310   38613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:35:01.994803   38613 kubeadm.go:877] updating cluster {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:35:01.994929   38613 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:35:01.994966   38613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:35:02.048360   38613 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:35:02.048387   38613 crio.go:433] Images already preloaded, skipping extraction
	I0505 21:35:02.048454   38613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:35:02.126023   38613 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:35:02.126047   38613 cache_images.go:84] Images are preloaded, skipping loading
	I0505 21:35:02.126058   38613 kubeadm.go:928] updating node { 192.168.39.178 8443 v1.30.0 crio true true} ...
	I0505 21:35:02.126177   38613 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-322980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 21:35:02.126252   38613 ssh_runner.go:195] Run: crio config
	I0505 21:35:02.184596   38613 cni.go:84] Creating CNI manager for ""
	I0505 21:35:02.184619   38613 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 21:35:02.184632   38613 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:35:02.184656   38613 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-322980 NodeName:ha-322980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:35:02.184873   38613 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-322980"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 21:35:02.184895   38613 kube-vip.go:111] generating kube-vip config ...
	I0505 21:35:02.184945   38613 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 21:35:02.199099   38613 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 21:35:02.199204   38613 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 21:35:02.199274   38613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:35:02.211132   38613 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:35:02.211187   38613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 21:35:02.223072   38613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0505 21:35:02.243932   38613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:35:02.262816   38613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0505 21:35:02.282099   38613 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0505 21:35:02.303088   38613 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0505 21:35:02.308584   38613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:35:02.485372   38613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:35:02.502699   38613 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980 for IP: 192.168.39.178
	I0505 21:35:02.502728   38613 certs.go:194] generating shared ca certs ...
	I0505 21:35:02.502747   38613 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:35:02.502959   38613 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:35:02.503011   38613 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:35:02.503020   38613 certs.go:256] generating profile certs ...
	I0505 21:35:02.503117   38613 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/client.key
	I0505 21:35:02.503152   38613 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.25d2ebee
	I0505 21:35:02.503170   38613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.25d2ebee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.178 192.168.39.228 192.168.39.254]
	I0505 21:35:02.615495   38613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.25d2ebee ...
	I0505 21:35:02.615524   38613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.25d2ebee: {Name:mkbd06e5be9df4285b4c5e041a9f13c28c454f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:35:02.615694   38613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.25d2ebee ...
	I0505 21:35:02.615707   38613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.25d2ebee: {Name:mka9d9e3d903e5eb3c82de934c4bbf64e46177c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:35:02.615774   38613 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt.25d2ebee -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt
	I0505 21:35:02.615924   38613 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key.25d2ebee -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key
	I0505 21:35:02.616058   38613 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key
	I0505 21:35:02.616074   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:35:02.616086   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:35:02.616096   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:35:02.616110   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:35:02.616120   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:35:02.616131   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:35:02.616143   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:35:02.616154   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:35:02.616196   38613 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:35:02.616228   38613 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:35:02.616239   38613 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:35:02.616265   38613 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:35:02.616287   38613 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:35:02.616328   38613 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:35:02.616367   38613 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:35:02.616404   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:35:02.616417   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:35:02.616429   38613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:35:02.617006   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:35:02.646904   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:35:02.673713   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:35:02.700277   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:35:02.726212   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 21:35:02.753256   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 21:35:02.779027   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:35:02.804672   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/ha-322980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 21:35:02.830946   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:35:02.858318   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:35:02.884727   38613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:35:02.909964   38613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:35:02.928702   38613 ssh_runner.go:195] Run: openssl version
	I0505 21:35:02.935599   38613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:35:02.948266   38613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:35:02.953066   38613 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:35:02.953117   38613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:35:02.959128   38613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:35:02.969758   38613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:35:02.982206   38613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:35:02.987075   38613 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:35:02.987124   38613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:35:02.993226   38613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:35:03.004094   38613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:35:03.017261   38613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:35:03.022090   38613 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:35:03.022150   38613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:35:03.028565   38613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 21:35:03.040355   38613 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:35:03.046014   38613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 21:35:03.052224   38613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 21:35:03.058805   38613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 21:35:03.064914   38613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 21:35:03.070971   38613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 21:35:03.077248   38613 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 21:35:03.083296   38613 kubeadm.go:391] StartCluster: {Name:ha-322980 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-322980 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.178 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.169 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:35:03.083422   38613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:35:03.083475   38613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:35:03.138563   38613 cri.go:89] found id: "f1b7497414f05d6ff38a79f45c12a13a66babda238cdd3d29d3259da5fc595e8"
	I0505 21:35:03.138587   38613 cri.go:89] found id: "355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c"
	I0505 21:35:03.138592   38613 cri.go:89] found id: "94752b251c71a06932b73db8104e820e473d1d4494c78884ffe58ad4eb867d3b"
	I0505 21:35:03.138597   38613 cri.go:89] found id: "8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7"
	I0505 21:35:03.138600   38613 cri.go:89] found id: "d64f6490c58bcaabe8dcfd1f6189e1e9b6ea82598e341f43024f2a406807c0bd"
	I0505 21:35:03.138605   38613 cri.go:89] found id: "d8e5582057ffa7695636c3c49a29af71b47c3e0e49d6d4f28aed3cb84503b54e"
	I0505 21:35:03.138609   38613 cri.go:89] found id: "b48ee84cd3ceb82bcfda671b85eb4b1b2793a18718f5c445445feaf19173b3d9"
	I0505 21:35:03.138612   38613 cri.go:89] found id: "0c012cc95d188bdded0cf101970a7bcf34d1c2860b11847a187db706e95a0138"
	I0505 21:35:03.138616   38613 cri.go:89] found id: "ea2d43ee9b97e09ed5e375d1d8edb724b3d836888d6a9fcd5ccb75b32e5d1424"
	I0505 21:35:03.138622   38613 cri.go:89] found id: "067837019b5f60ac2139d714d999a7c2c585578da4dc3279ac134cd69b882db6"
	I0505 21:35:03.138626   38613 cri.go:89] found id: "06be80792a085f9dea48d4219f306149664769dd55a7c6bc24d984254df5fc7d"
	I0505 21:35:03.138629   38613 cri.go:89] found id: "858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8"
	I0505 21:35:03.138633   38613 cri.go:89] found id: "852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b"
	I0505 21:35:03.138645   38613 cri.go:89] found id: "d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f"
	I0505 21:35:03.138663   38613 cri.go:89] found id: "366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a"
	I0505 21:35:03.138671   38613 cri.go:89] found id: "0b360d142570dfeb8a0253680ccfd97861e90b700b1ce9e2e5b315244aed2a3b"
	I0505 21:35:03.138675   38613 cri.go:89] found id: "e065fafa4b7aad342c5755ddbb2c69c3b4bd0efd44c9ce792388bc6c6f06121d"
	I0505 21:35:03.138680   38613 cri.go:89] found id: "4da23c67204614119fd12bb11bb2dcc384609fe399a32f06e2ca61ff52a8438c"
	I0505 21:35:03.138688   38613 cri.go:89] found id: "d73ef383ce1ab83dd2b2ada78891a4aa3835de2b4805157ac3e15584d0b4b29b"
	I0505 21:35:03.138691   38613 cri.go:89] found id: "97769959b22d6e891106c90e8ae981d40cb02ac960e23bdfb007b3c50d50c923"
	I0505 21:35:03.138695   38613 cri.go:89] found id: ""
	I0505 21:35:03.138742   38613 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.373114141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a5d1515-891c-45bd-b1de-f9329a129139 name=/runtime.v1.RuntimeService/Version
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.374928019Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e9be7bc-5d56-44b9-b8c8-cf736e6b22c0 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.375381453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714945452375358183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e9be7bc-5d56-44b9-b8c8-cf736e6b22c0 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.377634118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2d8380f-e80c-4ac1-a9af-b7641ec1c366 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.377880032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2d8380f-e80c-4ac1-a9af-b7641ec1c366 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.378544831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd915c3dba22af90f7aed0f6dfd347efd77a422f8013f9e16206a6c3c7c717d0,PodSandboxId:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714945085395448724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4339bd42993b7bc6c2ff073b40194cacc51c70b40a765dc5cbd4c0af2c755035,PodSandboxId:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714945082394863382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 6,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a734814e0474d32e5de7b335134c299d8cc37e15593029ec33a9856671f2c,PodSandboxId:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714945015391427036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de61fb39b57ed8a3eac293c7bf5d9a22d35b39c01d327216bcd936614ae3a3bc,PodSandboxId:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944982392449199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b68524c36b17be1b1024baa8244cb97f9821f0e32ef66ba49016dbc0a2ae5fee,PodSandboxId:e3d0f3523fcd003895abc8a674365bf48ff9c71feec1df3435b3b4071808e767,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944952396569836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e34eb53f73dea3ae9ac6fb92e74e90c303e23b775e6f0d6bbb57966226631,PodSandboxId:0167f2cd9752c1404cdb331924609b7177cb5926ce734b74ae563c5a1f5324c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944940760908908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25281bb34e5c1f921a226ad41b62e5d2d646c890c919211d4225eaae5b64858,PodSandboxId:bf00cf4e243bb5324a02582b8c2cb7d36f921c9f8624a2ccb6d02b2d07c5b74f,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944920405194507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0edfa4169296485a2f562827e26303e40cdeb2eab4169475486ceb06b3016b78,PodSandboxId:6f6df1fe2bd0f62b6c9b71112bc546237fb71ae77df14a6bf028d6b474494156,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944908513152034,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash:
d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f29543fe60666ca79cdd859c65f4e012453caa34aec600bb348a902dc8bac60,PodSandboxId:d046b5739c4c45d427aa00dfa1c9714d9bacd98026868c7d0966df0d45941d0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944908851453754,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container
.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b65a4994eb530985ae22b546d28c337201558888bd723bf7f7a07c3e2f787aa,PodSandboxId:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944907501240294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount:
5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36af6fb31f7f102488daa8a188158f589e68261fbcaddb9215e2e09d451bf266,PodSandboxId:624d55df9e644b6beffb63cc435323bbbf978f66c5779168395018eabbb1296d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944907715159809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036738ae8212722b43956751fe318ceda45cd3f7eee1b979e909231b72aa247c,PodSandboxId:31aaa7707c94161ffa1d606eb29705948b6fc211a4c669ac70d332e7f4c6c5f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944907539612810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5342fcbb6da2c858cf392ff9e1a2b3a5e9904303f6b5bc2e06929733f8c218,PodSandboxId:f2602411310b0a0a6ebb8339a9cfa4a64ed39717e590622b336bb3f71384b71c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944907374401463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d
27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fb541b398164615ee73073fd074fad2f8e633b76127a0e52466c93d4bdf158,PodSandboxId:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944907239170716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b7497414f05d6ff38a79f45c12a13a66babda238cdd3d29d3259da5fc595e8,PodSandboxId:36c9090a9259da22fc8a3a1cc16797c2b794c189f648ff411c22d72726ab2ad6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714944887526154268,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944741390863157,Labels:map[string]string{io.kubernetes.container.name:
kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1714944630237627038,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube
-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714944545718531570,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.name
space: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714944512450643891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714944512601137699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7ce
c8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714944512361323713,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714944512349368052,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2d8380f-e80c-4ac1-a9af-b7641ec1c366 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.386491834Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=625c59df-4e77-4a1b-916b-6168369f8622 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.387135997Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0167f2cd9752c1404cdb331924609b7177cb5926ce734b74ae563c5a1f5324c1,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-xt9l5,Uid:bbde9685-4494-40b7-bd53-9452fd970f5a,Namespace:default,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714944940602041423,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T21:19:47.677058601Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:624d55df9e644b6beffb63cc435323bbbf978f66c5779168395018eabbb1296d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-78zmw,Uid:e066e3ad-0574-44f9-acab-d7cec8b86788,Namespace:kube-system,Attempt:2,},State:
SANDBOX_READY,CreatedAt:1714944907019550551,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T21:16:27.773341631Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf00cf4e243bb5324a02582b8c2cb7d36f921c9f8624a2ccb6d02b2d07c5b74f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fqt45,Uid:27bdadca-f49c-4f50-b09c-07dd6067f39a,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1714944906925261066,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen:
2024-05-05T21:16:27.765070900Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f6df1fe2bd0f62b6c9b71112bc546237fb71ae77df14a6bf028d6b474494156,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-322980,Uid:b4b10859196db0958fa2b1c992ad5e8a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714944906920417981,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{kubernetes.io/config.hash: b4b10859196db0958fa2b1c992ad5e8a,kubernetes.io/config.seen: 2024-05-05T21:28:27.142362341Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d046b5739c4c45d427aa00dfa1c9714d9bacd98026868c7d0966df0d45941d0a,Metadata:&PodSandboxMetadata{Name:kube-proxy-8xdzd,Uid:d0b6492d-c0f5-45dd-8482-c447b81daa66,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714944906919368643,Labels:map[string]string{co
ntroller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T21:16:25.300221034Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bc212ac3-7499-4edc-b5a5-622b0bd4a891,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714944906880445304,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuratio
n: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-05T21:16:27.780013189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:31aaa7707c94161ffa1d606eb29705948b6fc211a4c669ac70d332e7f4c6c5f8,Metadata:&PodSandboxMetadata{Name:etcd-ha-322980,Uid:58f12977082107510fdbb696cd218155,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:171494490683016803
3,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.178:2379,kubernetes.io/config.hash: 58f12977082107510fdbb696cd218155,kubernetes.io/config.seen: 2024-05-05T21:16:14.349637909Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-322980,Uid:578ccf60a9d00c195d5069c63fb0b319,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714944906826465854,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c6
3fb0b319,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 578ccf60a9d00c195d5069c63fb0b319,kubernetes.io/config.seen: 2024-05-05T21:16:14.349643104Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e3d0f3523fcd003895abc8a674365bf48ff9c71feec1df3435b3b4071808e767,Metadata:&PodSandboxMetadata{Name:kindnet-lwtnx,Uid:4033535e-69f1-426c-bb17-831fad6336d5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714944906816032037,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-05T21:16:25.298282035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2602411310b0a0a6ebb8339a9cfa4a64ed39717e590622b336bb3f71384b71c,Metadata:&Po
dSandboxMetadata{Name:kube-scheduler-ha-322980,Uid:c588feae7d6204945d27bedaf4541d64,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714944906809949161,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c588feae7d6204945d27bedaf4541d64,kubernetes.io/config.seen: 2024-05-05T21:16:14.349644366Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-322980,Uid:25cdcec1c37ba86157b0b42297dfe2cf,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1714944906806079502,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kube
rnetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.178:8443,kubernetes.io/config.hash: 25cdcec1c37ba86157b0b42297dfe2cf,kubernetes.io/config.seen: 2024-05-05T21:16:14.349641877Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=625c59df-4e77-4a1b-916b-6168369f8622 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.388217337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2f505d7-ceb8-463a-b269-30cc79238d92 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.388290747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2f505d7-ceb8-463a-b269-30cc79238d92 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.389004844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd915c3dba22af90f7aed0f6dfd347efd77a422f8013f9e16206a6c3c7c717d0,PodSandboxId:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714945085395448724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4339bd42993b7bc6c2ff073b40194cacc51c70b40a765dc5cbd4c0af2c755035,PodSandboxId:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714945082394863382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 6,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a734814e0474d32e5de7b335134c299d8cc37e15593029ec33a9856671f2c,PodSandboxId:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714945015391427036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b68524c36b17be1b1024baa8244cb97f9821f0e32ef66ba49016dbc0a2ae5fee,PodSandboxId:e3d0f3523fcd003895abc8a674365bf48ff9c71feec1df3435b3b4071808e767,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944952396569836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e34eb53f73dea3ae9ac6fb92e74e90c303e23b775e6f0d6bbb57966226631,PodSandboxId:0167f2cd9752c1404cdb331924609b7177cb5926ce734b74ae563c5a1f5324c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944940760908908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25281bb34e5c1f921a226ad41b62e5d2d646c890c919211d4225eaae5b64858,PodSandboxId:bf00cf4e243bb5324a02582b8c2cb7d36f921c9f8624a2ccb6d02b2d07c5b74f,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944920405194507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"
containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0edfa4169296485a2f562827e26303e40cdeb2eab4169475486ceb06b3016b78,PodSandboxId:6f6df1fe2bd0f62b6c9b71112bc546237fb71ae77df14a6bf028d6b474494156,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944908513152034,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5
eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f29543fe60666ca79cdd859c65f4e012453caa34aec600bb348a902dc8bac60,PodSandboxId:d046b5739c4c45d427aa00dfa1c9714d9bacd98026868c7d0966df0d45941d0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944908851453754,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.rest
artCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36af6fb31f7f102488daa8a188158f589e68261fbcaddb9215e2e09d451bf266,PodSandboxId:624d55df9e644b6beffb63cc435323bbbf978f66c5779168395018eabbb1296d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944907715159809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036738ae8212722b43956751fe318ceda45cd3f7eee1b979e909231b72aa247c,PodSandboxId:31aaa7707c94161ffa1d606eb29705948b6fc211a4c669ac70d332e7f4c6c5f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944907539612810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5342fcbb6da2c858cf392ff9e1a2b3a5e9904303f6b5bc2e06929733f8c218,PodSandboxId:f2602411310b0a0a6ebb8339a9cfa4a64ed39717e590622b336bb3f71384b71c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944907374401463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7
d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2f505d7-ceb8-463a-b269-30cc79238d92 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.444542977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=580f7cc7-5cc6-436a-b770-9cf87689df5d name=/runtime.v1.RuntimeService/Version
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.444648244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=580f7cc7-5cc6-436a-b770-9cf87689df5d name=/runtime.v1.RuntimeService/Version
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.447046378Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acd3962d-d9b8-4898-8ded-680fb86af5ca name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.447696494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714945452447670497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acd3962d-d9b8-4898-8ded-680fb86af5ca name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.448833916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28be6863-a5e9-4f4f-9adc-a7269588745c name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.449006434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28be6863-a5e9-4f4f-9adc-a7269588745c name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.449461822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd915c3dba22af90f7aed0f6dfd347efd77a422f8013f9e16206a6c3c7c717d0,PodSandboxId:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714945085395448724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4339bd42993b7bc6c2ff073b40194cacc51c70b40a765dc5cbd4c0af2c755035,PodSandboxId:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714945082394863382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 6,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a734814e0474d32e5de7b335134c299d8cc37e15593029ec33a9856671f2c,PodSandboxId:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714945015391427036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de61fb39b57ed8a3eac293c7bf5d9a22d35b39c01d327216bcd936614ae3a3bc,PodSandboxId:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944982392449199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b68524c36b17be1b1024baa8244cb97f9821f0e32ef66ba49016dbc0a2ae5fee,PodSandboxId:e3d0f3523fcd003895abc8a674365bf48ff9c71feec1df3435b3b4071808e767,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944952396569836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e34eb53f73dea3ae9ac6fb92e74e90c303e23b775e6f0d6bbb57966226631,PodSandboxId:0167f2cd9752c1404cdb331924609b7177cb5926ce734b74ae563c5a1f5324c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944940760908908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25281bb34e5c1f921a226ad41b62e5d2d646c890c919211d4225eaae5b64858,PodSandboxId:bf00cf4e243bb5324a02582b8c2cb7d36f921c9f8624a2ccb6d02b2d07c5b74f,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944920405194507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0edfa4169296485a2f562827e26303e40cdeb2eab4169475486ceb06b3016b78,PodSandboxId:6f6df1fe2bd0f62b6c9b71112bc546237fb71ae77df14a6bf028d6b474494156,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944908513152034,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash:
d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f29543fe60666ca79cdd859c65f4e012453caa34aec600bb348a902dc8bac60,PodSandboxId:d046b5739c4c45d427aa00dfa1c9714d9bacd98026868c7d0966df0d45941d0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944908851453754,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container
.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b65a4994eb530985ae22b546d28c337201558888bd723bf7f7a07c3e2f787aa,PodSandboxId:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944907501240294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount:
5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36af6fb31f7f102488daa8a188158f589e68261fbcaddb9215e2e09d451bf266,PodSandboxId:624d55df9e644b6beffb63cc435323bbbf978f66c5779168395018eabbb1296d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944907715159809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036738ae8212722b43956751fe318ceda45cd3f7eee1b979e909231b72aa247c,PodSandboxId:31aaa7707c94161ffa1d606eb29705948b6fc211a4c669ac70d332e7f4c6c5f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944907539612810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5342fcbb6da2c858cf392ff9e1a2b3a5e9904303f6b5bc2e06929733f8c218,PodSandboxId:f2602411310b0a0a6ebb8339a9cfa4a64ed39717e590622b336bb3f71384b71c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944907374401463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d
27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fb541b398164615ee73073fd074fad2f8e633b76127a0e52466c93d4bdf158,PodSandboxId:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944907239170716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b7497414f05d6ff38a79f45c12a13a66babda238cdd3d29d3259da5fc595e8,PodSandboxId:36c9090a9259da22fc8a3a1cc16797c2b794c189f648ff411c22d72726ab2ad6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714944887526154268,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944741390863157,Labels:map[string]string{io.kubernetes.container.name:
kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1714944630237627038,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube
-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714944545718531570,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.name
space: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714944512450643891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714944512601137699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7ce
c8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714944512361323713,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714944512349368052,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28be6863-a5e9-4f4f-9adc-a7269588745c name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.500899479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75e715c9-8f16-465a-ab45-58c645a31818 name=/runtime.v1.RuntimeService/Version
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.501056798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75e715c9-8f16-465a-ab45-58c645a31818 name=/runtime.v1.RuntimeService/Version
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.503573795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df47d894-d742-4c80-81bb-a210959c700f name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.504072800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714945452504044692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df47d894-d742-4c80-81bb-a210959c700f name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.504831950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=584ffedb-30df-4bd6-aced-9e0d2ddc5407 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.504893494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=584ffedb-30df-4bd6-aced-9e0d2ddc5407 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:44:12 ha-322980 crio[7013]: time="2024-05-05 21:44:12.505311724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd915c3dba22af90f7aed0f6dfd347efd77a422f8013f9e16206a6c3c7c717d0,PodSandboxId:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714945085395448724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 5,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4339bd42993b7bc6c2ff073b40194cacc51c70b40a765dc5cbd4c0af2c755035,PodSandboxId:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714945082394863382,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount: 6,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455a734814e0474d32e5de7b335134c299d8cc37e15593029ec33a9856671f2c,PodSandboxId:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714945015391427036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de61fb39b57ed8a3eac293c7bf5d9a22d35b39c01d327216bcd936614ae3a3bc,PodSandboxId:e5113c6a2b38ab36594e7ba32f4d0fd039515a896d822ee1b3e183bad36ca309,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714944982392449199,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 578ccf60a9d00c195d5069c63fb0b319,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b68524c36b17be1b1024baa8244cb97f9821f0e32ef66ba49016dbc0a2ae5fee,PodSandboxId:e3d0f3523fcd003895abc8a674365bf48ff9c71feec1df3435b3b4071808e767,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714944952396569836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745e34eb53f73dea3ae9ac6fb92e74e90c303e23b775e6f0d6bbb57966226631,PodSandboxId:0167f2cd9752c1404cdb331924609b7177cb5926ce734b74ae563c5a1f5324c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714944940760908908,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25281bb34e5c1f921a226ad41b62e5d2d646c890c919211d4225eaae5b64858,PodSandboxId:bf00cf4e243bb5324a02582b8c2cb7d36f921c9f8624a2ccb6d02b2d07c5b74f,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944920405194507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kubernetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0edfa4169296485a2f562827e26303e40cdeb2eab4169475486ceb06b3016b78,PodSandboxId:6f6df1fe2bd0f62b6c9b71112bc546237fb71ae77df14a6bf028d6b474494156,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714944908513152034,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash:
d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f29543fe60666ca79cdd859c65f4e012453caa34aec600bb348a902dc8bac60,PodSandboxId:d046b5739c4c45d427aa00dfa1c9714d9bacd98026868c7d0966df0d45941d0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714944908851453754,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container
.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b65a4994eb530985ae22b546d28c337201558888bd723bf7f7a07c3e2f787aa,PodSandboxId:a76a5d08be3f17280f3cdd8669a758c237119910ff913cbbf126b0f0cebbcc34,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714944907501240294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc212ac3-7499-4edc-b5a5-622b0bd4a891,},Annotations:map[string]string{io.kubernetes.container.hash: c207764,io.kubernetes.container.restartCount:
5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36af6fb31f7f102488daa8a188158f589e68261fbcaddb9215e2e09d451bf266,PodSandboxId:624d55df9e644b6beffb63cc435323bbbf978f66c5779168395018eabbb1296d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714944907715159809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7cec8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036738ae8212722b43956751fe318ceda45cd3f7eee1b979e909231b72aa247c,PodSandboxId:31aaa7707c94161ffa1d606eb29705948b6fc211a4c669ac70d332e7f4c6c5f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714944907539612810,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5342fcbb6da2c858cf392ff9e1a2b3a5e9904303f6b5bc2e06929733f8c218,PodSandboxId:f2602411310b0a0a6ebb8339a9cfa4a64ed39717e590622b336bb3f71384b71c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714944907374401463,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d
27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fb541b398164615ee73073fd074fad2f8e633b76127a0e52466c93d4bdf158,PodSandboxId:89b1eb416984c61a9ee783cce71ac835f393c1c1dd30d6a85ae7ad723ca1fa3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714944907239170716,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25cdcec1c37ba86157b0b42297dfe2cf,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf39325,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1b7497414f05d6ff38a79f45c12a13a66babda238cdd3d29d3259da5fc595e8,PodSandboxId:36c9090a9259da22fc8a3a1cc16797c2b794c189f648ff411c22d72726ab2ad6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714944887526154268,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fqt45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27bdadca-f49c-4f50-b09c-07dd6067f39a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8fa26f22,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c,PodSandboxId:64801e377a379e3032b60fb4e259d498edbd40bb44d06239b6567a8489f97eef,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714944741390863157,Labels:map[string]string{io.kubernetes.container.name:
kindnet-cni,io.kubernetes.pod.name: kindnet-lwtnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4033535e-69f1-426c-bb17-831fad6336d5,},Annotations:map[string]string{io.kubernetes.container.hash: 5400a271,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1fc54998e0d9f8e8908df7b252d9046c75539a9f8a7b3052965e4689c68bb7,PodSandboxId:8e6a479fdea9d5e48225f93fa886dde1f176a498a35aebb5d75e093f2ca98e92,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1714944630237627038,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube
-vip-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4b10859196db0958fa2b1c992ad5e8a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378349efe1d23552d4480f47eb5b8f2985a9b9afe0582e8a15ae7bd295575d7f,PodSandboxId:9dfb38e6022a77884fc7f07c6d48d0ba1ff1031b9546850ba613c1a67c6eefa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714944545718531570,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xt9l5,io.kubernetes.pod.name
space: default,io.kubernetes.pod.uid: bbde9685-4494-40b7-bd53-9452fd970f5a,},Annotations:map[string]string{io.kubernetes.container.hash: ce6d6b7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b,PodSandboxId:e36e99eaa4a61ccc966411e5a4efcee570acc74235b8c9da766e66844f3e432c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714944512450643891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xdzd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d0b6492d-c0f5-45dd-8482-c447b81daa66,},Annotations:map[string]string{io.kubernetes.container.hash: 9b4684e6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8,PodSandboxId:cd2a674999e8ab59ec8a5f057e40ce67c67724010b62a201c6abe9e71f6a3a30,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714944512601137699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-78zmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e066e3ad-0574-44f9-acab-d7ce
c8b86788,},Annotations:map[string]string{io.kubernetes.container.hash: 3cdac550,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f,PodSandboxId:4777f05174b29b55270c89c82d4189567c9b871410dfc1754a18984bcfbff1a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714944512361323713,Lab
els:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c588feae7d6204945d27bedaf4541d64,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a,PodSandboxId:55b2bc86d17b354ac6e14cdd0acbe830b855d4ee17bd24922dff0f39203830a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714944512349368052,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-322980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f12977082107510fdbb696cd218155,},Annotations:map[string]string{io.kubernetes.container.hash: a7c6c285,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=584ffedb-30df-4bd6-aced-9e0d2ddc5407 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd915c3dba22a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   6 minutes ago       Running             kube-controller-manager   5                   e5113c6a2b38a       kube-controller-manager-ha-322980
	4339bd42993b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       6                   a76a5d08be3f1       storage-provisioner
	455a734814e04       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   7 minutes ago       Running             kube-apiserver            6                   89b1eb416984c       kube-apiserver-ha-322980
	de61fb39b57ed       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   7 minutes ago       Exited              kube-controller-manager   4                   e5113c6a2b38a       kube-controller-manager-ha-322980
	b68524c36b17b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   8 minutes ago       Running             kindnet-cni               6                   e3d0f3523fcd0       kindnet-lwtnx
	745e34eb53f73       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   8 minutes ago       Running             busybox                   2                   0167f2cd9752c       busybox-fc5497c4f-xt9l5
	b25281bb34e5c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   8 minutes ago       Running             coredns                   3                   bf00cf4e243bb       coredns-7db6d8ff4d-fqt45
	7f29543fe6066       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                2                   d046b5739c4c4       kube-proxy-8xdzd
	0edfa41692964       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   9 minutes ago       Running             kube-vip                  2                   6f6df1fe2bd0f       kube-vip-ha-322980
	36af6fb31f7f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   2                   624d55df9e644       coredns-7db6d8ff4d-78zmw
	036738ae82127       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   31aaa7707c941       etcd-ha-322980
	9b65a4994eb53       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner       5                   a76a5d08be3f1       storage-provisioner
	6e5342fcbb6da       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   f2602411310b0       kube-scheduler-ha-322980
	e3fb541b39816       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Exited              kube-apiserver            5                   89b1eb416984c       kube-apiserver-ha-322980
	f1b7497414f05       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Exited              coredns                   2                   36c9090a9259d       coredns-7db6d8ff4d-fqt45
	355a3bf6a6f15       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5   11 minutes ago      Exited              kindnet-cni               5                   64801e377a379       kindnet-lwtnx
	8b1fc54998e0d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba   13 minutes ago      Exited              kube-vip                  1                   8e6a479fdea9d       kube-vip-ha-322980
	378349efe1d23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   15 minutes ago      Exited              busybox                   1                   9dfb38e6022a7       busybox-fc5497c4f-xt9l5
	858ab02f25618       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Exited              coredns                   1                   cd2a674999e8a       coredns-7db6d8ff4d-78zmw
	852f56752c643       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   15 minutes ago      Exited              kube-proxy                1                   e36e99eaa4a61       kube-proxy-8xdzd
	d864b4fda0bb9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   15 minutes ago      Exited              kube-scheduler            1                   4777f05174b29       kube-scheduler-ha-322980
	366a7799ffc65       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Exited              etcd                      1                   55b2bc86d17b3       etcd-ha-322980
	
	
	==> coredns [36af6fb31f7f102488daa8a188158f589e68261fbcaddb9215e2e09d451bf266] <==
	Trace[901248357]: [10.005837518s] [10.005837518s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2076603121]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:35:12.699) (total time: 10001ms):
	Trace[2076603121]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:35:22.701)
	Trace[2076603121]: [10.001656548s] [10.001656548s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:55464->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:55464->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:55470->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.10:55470->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [858ab02f25618649e61d3cb75cf0fbd9a3bfbbbd2a9556b0257647ec91b0c2b8] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2094402806]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:42.233) (total time: 10949ms):
	Trace[2094402806]: ---"Objects listed" error:Unauthorized 10949ms (21:32:53.182)
	Trace[2094402806]: [10.949609309s] [10.949609309s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[718903134]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:41.764) (total time: 11420ms):
	Trace[718903134]: ---"Objects listed" error:Unauthorized 11419ms (21:32:53.184)
	Trace[718903134]: [11.42063193s] [11.42063193s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[717541916]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:56.427) (total time: 10831ms):
	Trace[717541916]: ---"Objects listed" error:Unauthorized 10831ms (21:33:07.259)
	Trace[717541916]: [10.83175781s] [10.83175781s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: Trace[1786829321]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-May-2024 21:32:56.656) (total time: 10617ms):
	Trace[1786829321]: ---"Objects listed" error:unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug="" 10617ms (21:33:07.274)
	Trace[1786829321]: [10.617187248s] [10.617187248s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: unexpected error when reading response body. Please retry. Original error: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b25281bb34e5c1f921a226ad41b62e5d2d646c890c919211d4225eaae5b64858] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43640 - 9075 "HINFO IN 7837172191638095922.4344800403725060409. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014108425s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:48472->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:48472->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:48458->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:48458->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:48442->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:48442->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f1b7497414f05d6ff38a79f45c12a13a66babda238cdd3d29d3259da5fc595e8] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35207 - 62071 "HINFO IN 6403860514823110407.7424276429702029975. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015136049s
	
	
	==> describe nodes <==
	Name:               ha-322980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T21_16_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:44:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:41:00 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:41:00 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:41:00 +0000   Sun, 05 May 2024 21:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:41:00 +0000   Sun, 05 May 2024 21:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-322980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a019ec328ab467ca04365748baaa367
	  System UUID:                3a019ec3-28ab-467c-a043-65748baaa367
	  Boot ID:                    c9018f9a-79b9-43c5-a307-9ae120187dfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xt9l5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7db6d8ff4d-78zmw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-fqt45             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-322980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-lwtnx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-322980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-322980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-8xdzd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-322980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-322980                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 27m                  kube-proxy       
	  Normal   Starting                 8m19s                kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    27m                  kubelet          Node ha-322980 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m                  kubelet          Node ha-322980 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  27m                  kubelet          Node ha-322980 status is now: NodeHasSufficientMemory
	  Normal   Starting                 27m                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           27m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   NodeReady                27m                  kubelet          Node ha-322980 status is now: NodeReady
	  Normal   RegisteredNode           25m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           24m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Warning  ContainerGCFailed        9m59s (x4 over 16m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m16s                node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	  Normal   RegisteredNode           5m56s                node-controller  Node ha-322980 event: Registered Node ha-322980 in Controller
	
	
	Name:               ha-322980-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_18_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:44:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:41:40 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:41:40 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:41:40 +0000   Sun, 05 May 2024 21:29:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:41:40 +0000   Sun, 05 May 2024 21:36:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-322980-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5d1651406694de39b61eff245fccb61
	  System UUID:                c5d16514-0669-4de3-9b61-eff245fccb61
	  Boot ID:                    706eb75e-c582-45ff-8a49-2eb787764614
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tbmdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-ha-322980-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-lmgkm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-322980-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-322980-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-wbf7q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-322980-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-322980-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 25m                    kube-proxy       
	  Normal  Starting                 7m53s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25m                    node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           25m                    node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           24m                    node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  NodeNotReady             22m                    node-controller  Node ha-322980-m02 status is now: NodeNotReady
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  Starting                 8m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m47s (x9 over 8m47s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m47s (x7 over 8m47s)  kubelet          Node ha-322980-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m47s (x7 over 8m47s)  kubelet          Node ha-322980-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-322980-m02 event: Registered Node ha-322980-m02 in Controller
	
	
	Name:               ha-322980-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-322980-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-322980
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_20_29_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:20:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-322980-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:39:23 +0000   Sun, 05 May 2024 21:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:39:23 +0000   Sun, 05 May 2024 21:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:39:23 +0000   Sun, 05 May 2024 21:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:39:23 +0000   Sun, 05 May 2024 21:38:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-322980-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c8db3356b24ba197e491501ddbfd49
	  System UUID:                a4c8db33-56b2-4ba1-97e4-91501ddbfd49
	  Boot ID:                    089032cd-ad33-4e64-aad4-91f1550a6533
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2klvr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kindnet-nnc4q              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-68cxr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 23m                    kube-proxy       
	  Normal   Starting                 5m17s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  23m (x3 over 23m)      kubelet          Node ha-322980-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m (x3 over 23m)      kubelet          Node ha-322980-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23m (x3 over 23m)      kubelet          Node ha-322980-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           23m                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal   RegisteredNode           23m                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal   RegisteredNode           23m                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal   NodeReady                23m                    kubelet          Node ha-322980-m04 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal   NodeNotReady             14m                    node-controller  Node ha-322980-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           8m16s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal   RegisteredNode           5m56s                  node-controller  Node ha-322980-m04 event: Registered Node ha-322980-m04 in Controller
	  Normal   Starting                 5m21s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m21s (x3 over 5m21s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m21s (x3 over 5m21s)  kubelet          Node ha-322980-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m21s (x3 over 5m21s)  kubelet          Node ha-322980-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 5m21s (x2 over 5m21s)  kubelet          Node ha-322980-m04 has been rebooted, boot id: 089032cd-ad33-4e64-aad4-91f1550a6533
	  Normal   NodeReady                5m21s (x2 over 5m21s)  kubelet          Node ha-322980-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.935027] kauditd_printk_skb: 57 callbacks suppressed
	[  +9.150561] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.089537] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.653864] kauditd_printk_skb: 21 callbacks suppressed
	[May 5 21:18] kauditd_printk_skb: 74 callbacks suppressed
	[May 5 21:28] systemd-fstab-generator[3803]: Ignoring "noauto" option for root device
	[  +0.163899] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	[  +0.174337] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.161075] systemd-fstab-generator[3841]: Ignoring "noauto" option for root device
	[  +0.301050] systemd-fstab-generator[3869]: Ignoring "noauto" option for root device
	[  +0.856611] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	[  +4.601668] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.029599] kauditd_printk_skb: 86 callbacks suppressed
	[ +11.080916] kauditd_printk_skb: 1 callbacks suppressed
	[May 5 21:29] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.083309] kauditd_printk_skb: 5 callbacks suppressed
	[May 5 21:34] systemd-fstab-generator[6802]: Ignoring "noauto" option for root device
	[  +0.182906] systemd-fstab-generator[6815]: Ignoring "noauto" option for root device
	[  +0.268133] systemd-fstab-generator[6902]: Ignoring "noauto" option for root device
	[  +0.198648] systemd-fstab-generator[6954]: Ignoring "noauto" option for root device
	[  +0.303711] systemd-fstab-generator[6982]: Ignoring "noauto" option for root device
	[May 5 21:35] systemd-fstab-generator[7126]: Ignoring "noauto" option for root device
	[  +0.100681] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.062735] kauditd_printk_skb: 56 callbacks suppressed
	[  +7.159563] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [036738ae8212722b43956751fe318ceda45cd3f7eee1b979e909231b72aa247c] <==
	{"level":"warn","ts":"2024-05-05T21:44:12.376003Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.29:2380/version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-05T21:44:12.376047Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8c95a24aec1a1ea5","error":"Get \"https://192.168.39.29:2380/version\": dial tcp 192.168.39.29:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-05T21:44:12.439993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.745632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.840386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.874504Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.932306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.939988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.940342Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.945001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.965228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.988714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:12.999354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.004167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.008045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.01812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.027304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.037243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.04014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.040931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.044558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.051978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.066375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.075505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:44:13.140471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dced536bf07718ca","from":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> etcd [366a7799ffc65796c54bbea26b2f81953df8a6b0fd8399e51c1f3cf1f374ff2a] <==
	{"level":"warn","ts":"2024-05-05T21:33:11.176743Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1786397753024494759,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-05-05T21:33:11.658959Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-05T21:33:11.659039Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-322980","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.178:2380"],"advertise-client-urls":["https://192.168.39.178:2379"]}
	{"level":"warn","ts":"2024-05-05T21:33:11.659188Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:33:11.659217Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:33:11.667585Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:33:11.667621Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.178:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:33:11.667673Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"dced536bf07718ca","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-05T21:33:11.670015Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:33:11.670051Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:33:11.670085Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:33:11.670167Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:33:11.670222Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:33:11.670259Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:33:11.670268Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a1efc654ffe9f445"}
	{"level":"info","ts":"2024-05-05T21:33:11.670308Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:33:11.670321Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:33:11.670663Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:33:11.671094Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:33:11.671138Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:33:11.671167Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"dced536bf07718ca","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:33:11.671177Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8c95a24aec1a1ea5"}
	{"level":"info","ts":"2024-05-05T21:33:11.683544Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:33:11.683651Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.178:2380"}
	{"level":"info","ts":"2024-05-05T21:33:11.683659Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-322980","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.178:2380"],"advertise-client-urls":["https://192.168.39.178:2379"]}
	
	
	==> kernel <==
	 21:44:13 up 28 min,  0 users,  load average: 0.35, 0.42, 0.44
	Linux ha-322980 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [355a3bf6a6f151e9f26eb348a08353e45581aa4cee3a2aa154b9e12ee27a638c] <==
	I0505 21:32:21.859857       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0505 21:32:21.859939       1 main.go:107] hostIP = 192.168.39.178
	podIP = 192.168.39.178
	I0505 21:32:21.860106       1 main.go:116] setting mtu 1500 for CNI 
	I0505 21:32:21.860127       1 main.go:146] kindnetd IP family: "ipv4"
	I0505 21:32:21.860150       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0505 21:32:32.182919       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0505 21:32:46.182451       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0505 21:33:00.185274       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0505 21:33:04.870290       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0505 21:33:07.942325       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kindnet [b68524c36b17be1b1024baa8244cb97f9821f0e32ef66ba49016dbc0a2ae5fee] <==
	I0505 21:43:24.277621       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:43:34.287702       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:43:34.287937       1 main.go:227] handling current node
	I0505 21:43:34.288025       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:43:34.288082       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:43:34.288260       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:43:34.288301       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:43:44.305665       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:43:44.305716       1 main.go:227] handling current node
	I0505 21:43:44.305729       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:43:44.305735       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:43:44.305902       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:43:44.305977       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:43:54.364424       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:43:54.456093       1 main.go:227] handling current node
	I0505 21:43:54.456139       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:43:54.456176       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:43:54.456394       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:43:54.456439       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	I0505 21:44:04.473392       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0505 21:44:04.473440       1 main.go:227] handling current node
	I0505 21:44:04.473451       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I0505 21:44:04.473457       1 main.go:250] Node ha-322980-m02 has CIDR [10.244.1.0/24] 
	I0505 21:44:04.473566       1 main.go:223] Handling node with IPs: map[192.168.39.169:{}]
	I0505 21:44:04.473597       1 main.go:250] Node ha-322980-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [455a734814e0474d32e5de7b335134c299d8cc37e15593029ec33a9856671f2c] <==
	I0505 21:36:57.479124       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0505 21:36:57.479144       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0505 21:36:57.479199       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0505 21:36:57.479276       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0505 21:36:57.479283       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0505 21:36:57.562205       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:36:57.562292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:36:57.567437       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0505 21:36:57.567570       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:36:57.567667       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:36:57.568470       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:36:57.579370       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:36:57.579447       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:36:57.579614       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:36:57.579661       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:36:57.579668       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:36:57.579675       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:36:57.602506       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:36:57.603839       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:36:57.603890       1 policy_source.go:224] refreshing policies
	I0505 21:36:57.637588       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:36:58.475426       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0505 21:36:58.890028       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.178 192.168.39.228]
	I0505 21:36:58.891386       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:36:58.898916       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e3fb541b398164615ee73073fd074fad2f8e633b76127a0e52466c93d4bdf158] <==
	I0505 21:35:08.412628       1 options.go:221] external host was not specified, using 192.168.39.178
	I0505 21:35:08.415898       1 server.go:148] Version: v1.30.0
	I0505 21:35:08.415962       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:35:09.162866       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0505 21:35:09.205371       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:35:09.208018       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0505 21:35:09.208088       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0505 21:35:09.208389       1 instance.go:299] Using reconciler: lease
	W0505 21:35:29.161321       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0505 21:35:29.161886       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0505 21:35:29.210038       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0505 21:35:29.210053       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [bd915c3dba22af90f7aed0f6dfd347efd77a422f8013f9e16206a6c3c7c717d0] <==
	I0505 21:38:17.340502       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0505 21:38:17.340524       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0505 21:38:17.340546       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0505 21:38:17.345556       1 shared_informer.go:320] Caches are synced for taint
	I0505 21:38:17.345717       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0505 21:38:17.349862       1 shared_informer.go:320] Caches are synced for endpoint
	I0505 21:38:17.358466       1 shared_informer.go:320] Caches are synced for PVC protection
	I0505 21:38:17.359835       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0505 21:38:17.362946       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0505 21:38:17.378147       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-322980-m02"
	I0505 21:38:17.378297       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-322980-m04"
	I0505 21:38:17.378396       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-322980"
	I0505 21:38:17.378749       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0505 21:38:17.519972       1 shared_informer.go:320] Caches are synced for disruption
	I0505 21:38:17.525979       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:38:17.531281       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:38:17.968895       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:38:17.969084       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:38:17.969132       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0505 21:38:52.276280       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-322980-m04"
	I0505 21:38:52.320726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.053µs"
	I0505 21:38:52.363389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="264.503µs"
	I0505 21:38:53.118018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.337µs"
	I0505 21:38:58.753623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.379435ms"
	I0505 21:38:58.754139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.349µs"
	
	
	==> kube-controller-manager [de61fb39b57ed8a3eac293c7bf5d9a22d35b39c01d327216bcd936614ae3a3bc] <==
	I0505 21:36:23.006072       1 serving.go:380] Generated self-signed cert in-memory
	I0505 21:36:23.189316       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 21:36:23.189567       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:36:23.193301       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0505 21:36:23.194196       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:36:23.194365       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:36:23.194516       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0505 21:36:33.196690       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.178:8443/healthz\": dial tcp 192.168.39.178:8443: connect: connection refused"
	
	
	==> kube-proxy [7f29543fe60666ca79cdd859c65f4e012453caa34aec600bb348a902dc8bac60] <==
	I0505 21:35:09.809749       1 server_linux.go:69] "Using iptables proxy"
	E0505 21:35:12.487182       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0505 21:35:15.558431       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0505 21:35:18.630377       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0505 21:35:24.775108       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0505 21:35:33.991660       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-322980\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0505 21:35:53.829455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.178"]
	I0505 21:35:53.908411       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:35:53.908468       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:35:53.908487       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:35:53.913165       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:35:53.913462       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:35:53.913513       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:35:53.915219       1 config.go:192] "Starting service config controller"
	I0505 21:35:53.915315       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:35:53.915908       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:35:53.916121       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:35:53.917449       1 config.go:319] "Starting node config controller"
	I0505 21:35:53.917560       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:35:54.016906       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:35:54.017168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:35:54.018725       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [852f56752c6434b6c374dd0257366bf739e210aee07f80277d63776fd528299b] <==
	W0505 21:31:15.944079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:15.944094       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:15.944189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:25.158902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:25.158993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:25.159118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:25.159183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:28.230693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:28.230870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:43.591408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:43.591709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:43.592238       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:43.592473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:31:46.663488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:31:46.663604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:11.245050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:11.249052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:11.248957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:11.253949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:17.383674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:17.383888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-322980&resourceVersion=2602": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:32:45.031724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:32:45.032108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	W0505 21:33:00.390486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	E0505 21:33:00.390704       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2604": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [6e5342fcbb6da2c858cf392ff9e1a2b3a5e9904303f6b5bc2e06929733f8c218] <==
	W0505 21:36:37.450471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:37.450556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:37.699859       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.178:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:37.700079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.178:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:40.066450       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.178:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:40.066592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.178:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:40.236200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:40.236274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.178:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:42.493969       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.178:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:42.494043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.178:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:43.629289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.178:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:43.629549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.178:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:43.978528       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:43.978648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:45.544004       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.178:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:45.544108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.178:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:50.738583       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:50.738736       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.178:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:54.707363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:54.707565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.178:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:54.924253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.178:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	E0505 21:36:54.924407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.178:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.178:8443: connect: connection refused
	W0505 21:36:57.482545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:36:57.482707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0505 21:37:46.524089       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d864b4fda0bb945209c47a2404e8a012f5dac000a178c964de8c1e8cc8cb9a9f] <==
	W0505 21:32:44.527439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:32:44.527519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:32:44.577580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:32:44.577681       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:32:44.983096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:32:44.983210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:32:46.708263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:32:46.708331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:32:46.842131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:32:46.842260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0505 21:32:47.195060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0505 21:32:47.195248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0505 21:32:47.357428       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0505 21:32:47.357536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0505 21:32:49.280307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:32:49.280376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:32:50.253345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0505 21:32:50.253408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0505 21:32:55.016954       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:32:55.017115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:32:56.239220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:32:56.239331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 21:32:56.628243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:32:56.628317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:33:11.656944       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 05 21:39:14 ha-322980 kubelet[1385]: E0505 21:39:14.407434    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:39:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:39:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:39:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:39:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:40:14 ha-322980 kubelet[1385]: E0505 21:40:14.405282    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:40:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:40:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:40:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:40:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:41:14 ha-322980 kubelet[1385]: E0505 21:41:14.413238    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:41:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:41:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:41:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:41:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:42:14 ha-322980 kubelet[1385]: E0505 21:42:14.406706    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:42:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:42:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:42:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:42:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:43:14 ha-322980 kubelet[1385]: E0505 21:43:14.406365    1385 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:43:14 ha-322980 kubelet[1385]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:43:14 ha-322980 kubelet[1385]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:43:14 ha-322980 kubelet[1385]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:43:14 ha-322980 kubelet[1385]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 21:44:11.990858   41545 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18602-11466/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
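The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB token limit on a very long line in lastStart.txt. A minimal sketch, using only the Go standard library and a hypothetical file path (this is not minikube's actual logs code), of reading such a file with an enlarged scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; stands in for a log file with very long lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// bufio.Scanner rejects lines longer than its max token size
		// (64 KiB by default) with "bufio.Scanner: token too long";
		// raise the cap before scanning.
		scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}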
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-322980 -n ha-322980
helpers_test.go:261: (dbg) Run:  kubectl --context ha-322980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (314.26s)
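The kube-proxy and kube-scheduler logs above fail repeatedly with "dial tcp 192.168.39.254:8443: connect: no route to host", i.e. the ha-322980 control-plane VIP was unreachable while the secondary node was being added. A minimal standalone reachability probe for that endpoint (a sketch for diagnosis, not part of the test suite) could look like:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP and port taken from the failing reflector calls above.
		addr := "192.168.39.254:8443"

		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// "connect: no route to host" lands here when no control-plane
			// node is currently answering for the VIP.
			fmt.Println("unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("reachable:", conn.RemoteAddr())
	}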

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (313.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-019621
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-019621
E0505 21:52:34.873905   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:54:31.829172   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-019621: exit status 82 (2m2.715729598s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-019621-m03"  ...
	* Stopping node "multinode-019621-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
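The stop above gave up with GUEST_STOP_TIMEOUT (exit status 82) after roughly two minutes with the m02/m03 VMs still reporting state "Running". As an illustration of the command-with-deadline pattern involved (a sketch only; the "virsh shutdown" call is a hypothetical stand-in, not minikube's actual stop path):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Two-minute budget, mirroring the ~2m window after which the test
		// reported GUEST_STOP_TIMEOUT.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		// Hypothetical command; stands in for whatever actually stops the guest.
		cmd := exec.CommandContext(ctx, "virsh", "shutdown", "multinode-019621-m03")
		out, err := cmd.CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			// The guest never reached a stopped state within the budget.
			fmt.Println("stop timed out; guest still running")
			return
		}
		if err != nil {
			fmt.Println("stop failed:", err, string(out))
			return
		}
		fmt.Println("stop requested:", string(out))
	}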
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-019621" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019621 --wait=true -v=8 --alsologtostderr
E0505 21:56:51.947328   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019621 --wait=true -v=8 --alsologtostderr: (3m8.259375627s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-019621
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-019621 -n multinode-019621
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-019621 logs -n 25: (1.761495128s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile971504099/001/cp-test_multinode-019621-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621:/home/docker/cp-test_multinode-019621-m02_multinode-019621.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621 sudo cat                                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m02_multinode-019621.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03:/home/docker/cp-test_multinode-019621-m02_multinode-019621-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621-m03 sudo cat                                   | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m02_multinode-019621-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp testdata/cp-test.txt                                                | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile971504099/001/cp-test_multinode-019621-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621:/home/docker/cp-test_multinode-019621-m03_multinode-019621.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621 sudo cat                                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m03_multinode-019621.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02:/home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621-m02 sudo cat                                   | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-019621 node stop m03                                                          | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:52 UTC |
	| node    | multinode-019621 node start                                                             | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:52 UTC | 05 May 24 21:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-019621                                                                | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:52 UTC |                     |
	| stop    | -p multinode-019621                                                                     | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:52 UTC |                     |
	| start   | -p multinode-019621                                                                     | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:54 UTC | 05 May 24 21:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-019621                                                                | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:57 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:54:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:54:35.365524   48764 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:54:35.365789   48764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:54:35.365800   48764 out.go:304] Setting ErrFile to fd 2...
	I0505 21:54:35.365804   48764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:54:35.365983   48764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:54:35.366565   48764 out.go:298] Setting JSON to false
	I0505 21:54:35.367440   48764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5822,"bootTime":1714940253,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:54:35.367520   48764 start.go:139] virtualization: kvm guest
	I0505 21:54:35.370265   48764 out.go:177] * [multinode-019621] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:54:35.371936   48764 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:54:35.371944   48764 notify.go:220] Checking for updates...
	I0505 21:54:35.373550   48764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:54:35.375278   48764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:54:35.377007   48764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:54:35.378385   48764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:54:35.379805   48764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:54:35.382157   48764 config.go:182] Loaded profile config "multinode-019621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:54:35.382369   48764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:54:35.383439   48764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:54:35.383533   48764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:54:35.398745   48764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
	I0505 21:54:35.399202   48764 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:54:35.399755   48764 main.go:141] libmachine: Using API Version  1
	I0505 21:54:35.399780   48764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:54:35.400061   48764 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:54:35.400214   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:54:35.435828   48764 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:54:35.437234   48764 start.go:297] selected driver: kvm2
	I0505 21:54:35.437262   48764 start.go:901] validating driver "kvm2" against &{Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:54:35.437442   48764 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:54:35.437888   48764 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:54:35.437982   48764 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:54:35.452901   48764 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:54:35.453617   48764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:54:35.453683   48764 cni.go:84] Creating CNI manager for ""
	I0505 21:54:35.453699   48764 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 21:54:35.453771   48764 start.go:340] cluster config:
	{Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:54:35.453926   48764 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:54:35.455814   48764 out.go:177] * Starting "multinode-019621" primary control-plane node in "multinode-019621" cluster
	I0505 21:54:35.457279   48764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:54:35.457332   48764 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:54:35.457343   48764 cache.go:56] Caching tarball of preloaded images
	I0505 21:54:35.457446   48764 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:54:35.457460   48764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:54:35.457614   48764 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/config.json ...
	I0505 21:54:35.457854   48764 start.go:360] acquireMachinesLock for multinode-019621: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:54:35.457920   48764 start.go:364] duration metric: took 44.275µs to acquireMachinesLock for "multinode-019621"
	I0505 21:54:35.457942   48764 start.go:96] Skipping create...Using existing machine configuration
	I0505 21:54:35.457951   48764 fix.go:54] fixHost starting: 
	I0505 21:54:35.458238   48764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:54:35.458285   48764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:54:35.472999   48764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0505 21:54:35.473441   48764 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:54:35.473848   48764 main.go:141] libmachine: Using API Version  1
	I0505 21:54:35.473869   48764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:54:35.474300   48764 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:54:35.474496   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:54:35.474668   48764 main.go:141] libmachine: (multinode-019621) Calling .GetState
	I0505 21:54:35.476278   48764 fix.go:112] recreateIfNeeded on multinode-019621: state=Running err=<nil>
	W0505 21:54:35.476297   48764 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 21:54:35.479165   48764 out.go:177] * Updating the running kvm2 "multinode-019621" VM ...
	I0505 21:54:35.480383   48764 machine.go:94] provisionDockerMachine start ...
	I0505 21:54:35.480400   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:54:35.480590   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.483355   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.483852   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.483885   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.484031   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.484218   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.484402   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.484629   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.484825   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:35.485004   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:35.485016   48764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 21:54:35.602100   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-019621
	
	I0505 21:54:35.602139   48764 main.go:141] libmachine: (multinode-019621) Calling .GetMachineName
	I0505 21:54:35.602372   48764 buildroot.go:166] provisioning hostname "multinode-019621"
	I0505 21:54:35.602393   48764 main.go:141] libmachine: (multinode-019621) Calling .GetMachineName
	I0505 21:54:35.602582   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.604950   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.605336   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.605365   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.605553   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.605699   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.605852   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.605999   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.606157   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:35.606363   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:35.606388   48764 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-019621 && echo "multinode-019621" | sudo tee /etc/hostname
	I0505 21:54:35.736303   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-019621
	
	I0505 21:54:35.736335   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.739143   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.739579   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.739614   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.739765   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.739977   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.740134   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.740307   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.740452   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:35.740642   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:35.740667   48764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-019621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-019621/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-019621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:54:35.852981   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:54:35.853012   48764 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:54:35.853051   48764 buildroot.go:174] setting up certificates
	I0505 21:54:35.853063   48764 provision.go:84] configureAuth start
	I0505 21:54:35.853078   48764 main.go:141] libmachine: (multinode-019621) Calling .GetMachineName
	I0505 21:54:35.853367   48764 main.go:141] libmachine: (multinode-019621) Calling .GetIP
	I0505 21:54:35.856169   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.856523   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.856548   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.856730   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.858965   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.859458   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.859501   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.859690   48764 provision.go:143] copyHostCerts
	I0505 21:54:35.859723   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:54:35.859773   48764 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:54:35.859782   48764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:54:35.859864   48764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:54:35.859973   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:54:35.859998   48764 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:54:35.860008   48764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:54:35.860046   48764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:54:35.860114   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:54:35.860138   48764 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:54:35.860147   48764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:54:35.860181   48764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:54:35.860243   48764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.multinode-019621 san=[127.0.0.1 192.168.39.30 localhost minikube multinode-019621]
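configureAuth then regenerates the machine's server certificate with the SAN list shown above (127.0.0.1, the node IP 192.168.39.30, localhost, minikube, and the machine name). To confirm which names a generated server.pem actually covers, an openssl one-liner such as the following works (paths taken from this log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'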
	I0505 21:54:35.938701   48764 provision.go:177] copyRemoteCerts
	I0505 21:54:35.938755   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:54:35.938777   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.941747   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.942069   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.942097   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.942302   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.942491   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.942622   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.942740   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:54:36.031277   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:54:36.031345   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0505 21:54:36.061323   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:54:36.061382   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 21:54:36.090462   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:54:36.090522   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
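copyRemoteCerts pushes server.pem, server-key.pem and ca.pem into /etc/docker on the guest; as this log shows, that path is used even though the runtime here is CRI-O. A simple consistency check on the guest, assuming openssl is present in the Buildroot image, is to verify the server certificate against the CA:

	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# expected output: /etc/docker/server.pem: OK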
	I0505 21:54:36.118349   48764 provision.go:87] duration metric: took 265.274749ms to configureAuth
	I0505 21:54:36.118376   48764 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:54:36.118625   48764 config.go:182] Loaded profile config "multinode-019621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:54:36.118718   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:36.121585   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:36.121941   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:36.121961   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:36.122176   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:36.122380   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:36.122560   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:36.122670   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:36.122827   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:36.123006   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:36.123026   48764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:56:07.102795   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:56:07.102823   48764 machine.go:97] duration metric: took 1m31.622428206s to provisionDockerMachine
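The `%!s(MISSING)` fragments in the command above are Go's fmt "missing argument" marker leaking into the re-logged format string, not part of what ran on the guest; the echoed output at 21:56:07 confirms the real string was substituted. The step simply drops a CRIO_MINIKUBE_OPTIONS file into /etc/sysconfig and restarts CRI-O, roughly:

	sudo mkdir -p /etc/sysconfig
	printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio

Note the timestamps: this single command spans 21:54:36 to 21:56:07, so writing the file plus restarting crio accounts for almost all of the 1m31.6s that provisionDockerMachine reports.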
	I0505 21:56:07.102836   48764 start.go:293] postStartSetup for "multinode-019621" (driver="kvm2")
	I0505 21:56:07.102847   48764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:56:07.102867   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.103218   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:56:07.103242   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.106285   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.106702   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.106726   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.106885   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.107055   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.107241   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.107410   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:56:07.196417   48764 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:56:07.201193   48764 command_runner.go:130] > NAME=Buildroot
	I0505 21:56:07.201207   48764 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0505 21:56:07.201211   48764 command_runner.go:130] > ID=buildroot
	I0505 21:56:07.201215   48764 command_runner.go:130] > VERSION_ID=2023.02.9
	I0505 21:56:07.201220   48764 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0505 21:56:07.201383   48764 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:56:07.201407   48764 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:56:07.201478   48764 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:56:07.201556   48764 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:56:07.201565   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:56:07.201645   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:56:07.211276   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:56:07.237857   48764 start.go:296] duration metric: took 135.006162ms for postStartSetup
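postStartSetup mirrors anything placed under the profile's .minikube/files tree onto the guest; in this run that is a single extra file, 187982.pem, copied into /etc/ssl/certs. A quick look on the guest (assuming the file really is a PEM certificate, as its path suggests):

	ls -l /etc/ssl/certs/187982.pem
	sudo openssl x509 -noout -subject -in /etc/ssl/certs/187982.pem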
	I0505 21:56:07.237900   48764 fix.go:56] duration metric: took 1m31.779950719s for fixHost
	I0505 21:56:07.237921   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.240675   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.241067   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.241100   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.241209   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.241400   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.241562   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.241758   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.241924   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:56:07.242084   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:56:07.242095   48764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:56:07.353227   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714946167.327333939
	
	I0505 21:56:07.353259   48764 fix.go:216] guest clock: 1714946167.327333939
	I0505 21:56:07.353266   48764 fix.go:229] Guest: 2024-05-05 21:56:07.327333939 +0000 UTC Remote: 2024-05-05 21:56:07.237905307 +0000 UTC m=+91.920851726 (delta=89.428632ms)
	I0505 21:56:07.353285   48764 fix.go:200] guest clock delta is within tolerance: 89.428632ms
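`date +%!s(MISSING).%!N(MISSING)` is the same logging artifact as before; the command issued on the guest is `date +%s.%N`. The check is plain subtraction of wall-clock readings: guest 1714946167.327333939 minus the local reading 1714946167.237905307 gives the 89.428632ms delta reported above, well inside tolerance. Done by hand with the key and address from this log:

	date +%s.%N    # on the CI host
	ssh -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa \
	  docker@192.168.39.30 'date +%s.%N'    # on the guest; compare the two readings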
	I0505 21:56:07.353289   48764 start.go:83] releasing machines lock for "multinode-019621", held for 1m31.895357194s
	I0505 21:56:07.353305   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.353561   48764 main.go:141] libmachine: (multinode-019621) Calling .GetIP
	I0505 21:56:07.356426   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.356757   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.356793   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.356979   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.357540   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.357688   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.357790   48764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:56:07.357827   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.357887   48764 ssh_runner.go:195] Run: cat /version.json
	I0505 21:56:07.357924   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.360512   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.360731   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.360916   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.360943   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.361066   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.361113   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.361141   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.361237   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.361322   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.361394   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.361463   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.361570   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.361638   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:56:07.361707   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:56:07.475261   48764 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0505 21:56:07.476098   48764 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0505 21:56:07.476259   48764 ssh_runner.go:195] Run: systemctl --version
	I0505 21:56:07.483431   48764 command_runner.go:130] > systemd 252 (252)
	I0505 21:56:07.483475   48764 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0505 21:56:07.483558   48764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:56:07.649464   48764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 21:56:07.658749   48764 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0505 21:56:07.659231   48764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:56:07.659313   48764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:56:07.669806   48764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
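`%!p(MISSING)` in the find invocation stands for find's `%p` printf directive (another re-logged format string). The step renames any pre-existing bridge/podman CNI configs out of the way so they cannot conflict with the CNI minikube manages; in this run nothing matched. Spelled out, with quoting added so it can be pasted into an interactive shell:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;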
	I0505 21:56:07.669824   48764 start.go:494] detecting cgroup driver to use...
	I0505 21:56:07.669873   48764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:56:07.687511   48764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:56:07.702609   48764 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:56:07.702734   48764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:56:07.717414   48764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:56:07.732316   48764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:56:07.888503   48764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:56:08.037683   48764 docker.go:233] disabling docker service ...
	I0505 21:56:08.037761   48764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:56:08.055727   48764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:56:08.070319   48764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:56:08.220380   48764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:56:08.364699   48764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
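Because this cluster uses CRI-O, the provisioner makes sure neither cri-dockerd nor dockerd can claim the CRI socket: each is stopped, its socket disabled, and its service masked, then docker is confirmed inactive. Condensed (and with the stray `service` token dropped from the is-active check), the equivalent sequence is:

	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker || echo 'docker is not active'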
	I0505 21:56:08.380706   48764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:56:08.401379   48764 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
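The crictl.yaml write (again logged with a %!s placeholder, while the echoed line above shows the real content) just points crictl at the CRI-O socket; the file is a single line and crictl picks it up without extra flags:

	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl info > /dev/null && echo 'CRI endpoint reachable'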
	I0505 21:56:08.401704   48764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:56:08.401763   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.414319   48764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:56:08.414387   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.426642   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.440247   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.452959   48764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:56:08.466082   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.478827   48764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.490821   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
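This run of sed edits patches CRI-O's drop-in config in place rather than templating a new file: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and re-add the unprivileged-port sysctl. After the edits, /etc/crio/crio.conf.d/02-crio.conf should contain lines like the ones shown in the comments below (a sketch; values come straight from the sed expressions above):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",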
	I0505 21:56:08.503542   48764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:56:08.514535   48764 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0505 21:56:08.514606   48764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:56:08.525470   48764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:56:08.671769   48764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:56:09.263404   48764 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:56:09.263491   48764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:56:09.268914   48764 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0505 21:56:09.268935   48764 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0505 21:56:09.268942   48764 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0505 21:56:09.268948   48764 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0505 21:56:09.268953   48764 command_runner.go:130] > Access: 2024-05-05 21:56:09.121071108 +0000
	I0505 21:56:09.268963   48764 command_runner.go:130] > Modify: 2024-05-05 21:56:09.121071108 +0000
	I0505 21:56:09.268969   48764 command_runner.go:130] > Change: 2024-05-05 21:56:09.121071108 +0000
	I0505 21:56:09.268972   48764 command_runner.go:130] >  Birth: -
	I0505 21:56:09.269150   48764 start.go:562] Will wait 60s for crictl version
	I0505 21:56:09.269211   48764 ssh_runner.go:195] Run: which crictl
	I0505 21:56:09.273557   48764 command_runner.go:130] > /usr/bin/crictl
	I0505 21:56:09.273688   48764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:56:09.313317   48764 command_runner.go:130] > Version:  0.1.0
	I0505 21:56:09.313345   48764 command_runner.go:130] > RuntimeName:  cri-o
	I0505 21:56:09.313351   48764 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0505 21:56:09.313356   48764 command_runner.go:130] > RuntimeApiVersion:  v1
	I0505 21:56:09.314643   48764 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
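Both 60-second waits above boil down to retrying until the runtime answers: first for the socket file to appear after the restart, then for crictl to return version information over it. A rough by-hand equivalent:

	timeout 60 sh -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 1; done'
	sudo /usr/bin/crictl version   # expect RuntimeName cri-o, RuntimeApiVersion v1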
	I0505 21:56:09.314731   48764 ssh_runner.go:195] Run: crio --version
	I0505 21:56:09.345818   48764 command_runner.go:130] > crio version 1.29.1
	I0505 21:56:09.345842   48764 command_runner.go:130] > Version:        1.29.1
	I0505 21:56:09.345852   48764 command_runner.go:130] > GitCommit:      unknown
	I0505 21:56:09.345858   48764 command_runner.go:130] > GitCommitDate:  unknown
	I0505 21:56:09.345864   48764 command_runner.go:130] > GitTreeState:   clean
	I0505 21:56:09.345874   48764 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0505 21:56:09.345880   48764 command_runner.go:130] > GoVersion:      go1.21.6
	I0505 21:56:09.345886   48764 command_runner.go:130] > Compiler:       gc
	I0505 21:56:09.345893   48764 command_runner.go:130] > Platform:       linux/amd64
	I0505 21:56:09.345900   48764 command_runner.go:130] > Linkmode:       dynamic
	I0505 21:56:09.345907   48764 command_runner.go:130] > BuildTags:      
	I0505 21:56:09.345914   48764 command_runner.go:130] >   containers_image_ostree_stub
	I0505 21:56:09.345921   48764 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0505 21:56:09.345932   48764 command_runner.go:130] >   btrfs_noversion
	I0505 21:56:09.345939   48764 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0505 21:56:09.345949   48764 command_runner.go:130] >   libdm_no_deferred_remove
	I0505 21:56:09.345954   48764 command_runner.go:130] >   seccomp
	I0505 21:56:09.345963   48764 command_runner.go:130] > LDFlags:          unknown
	I0505 21:56:09.345969   48764 command_runner.go:130] > SeccompEnabled:   true
	I0505 21:56:09.345975   48764 command_runner.go:130] > AppArmorEnabled:  false
	I0505 21:56:09.346057   48764 ssh_runner.go:195] Run: crio --version
	I0505 21:56:09.382776   48764 command_runner.go:130] > crio version 1.29.1
	I0505 21:56:09.382815   48764 command_runner.go:130] > Version:        1.29.1
	I0505 21:56:09.382821   48764 command_runner.go:130] > GitCommit:      unknown
	I0505 21:56:09.382825   48764 command_runner.go:130] > GitCommitDate:  unknown
	I0505 21:56:09.382829   48764 command_runner.go:130] > GitTreeState:   clean
	I0505 21:56:09.382835   48764 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0505 21:56:09.382845   48764 command_runner.go:130] > GoVersion:      go1.21.6
	I0505 21:56:09.382849   48764 command_runner.go:130] > Compiler:       gc
	I0505 21:56:09.382854   48764 command_runner.go:130] > Platform:       linux/amd64
	I0505 21:56:09.382859   48764 command_runner.go:130] > Linkmode:       dynamic
	I0505 21:56:09.382869   48764 command_runner.go:130] > BuildTags:      
	I0505 21:56:09.382876   48764 command_runner.go:130] >   containers_image_ostree_stub
	I0505 21:56:09.382883   48764 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0505 21:56:09.382889   48764 command_runner.go:130] >   btrfs_noversion
	I0505 21:56:09.382902   48764 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0505 21:56:09.382909   48764 command_runner.go:130] >   libdm_no_deferred_remove
	I0505 21:56:09.382916   48764 command_runner.go:130] >   seccomp
	I0505 21:56:09.382923   48764 command_runner.go:130] > LDFlags:          unknown
	I0505 21:56:09.382930   48764 command_runner.go:130] > SeccompEnabled:   true
	I0505 21:56:09.382937   48764 command_runner.go:130] > AppArmorEnabled:  false
	I0505 21:56:09.386612   48764 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:56:09.388198   48764 main.go:141] libmachine: (multinode-019621) Calling .GetIP
	I0505 21:56:09.390959   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:09.391260   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:09.391293   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:09.391497   48764 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:56:09.396816   48764 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0505 21:56:09.396985   48764 kubeadm.go:877] updating cluster {Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:56:09.397133   48764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:56:09.397171   48764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:56:09.448515   48764 command_runner.go:130] > {
	I0505 21:56:09.448537   48764 command_runner.go:130] >   "images": [
	I0505 21:56:09.448542   48764 command_runner.go:130] >     {
	I0505 21:56:09.448549   48764 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0505 21:56:09.448554   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448560   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0505 21:56:09.448564   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448568   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448576   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0505 21:56:09.448583   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0505 21:56:09.448587   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448595   48764 command_runner.go:130] >       "size": "65291810",
	I0505 21:56:09.448601   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.448606   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.448615   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.448620   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.448640   48764 command_runner.go:130] >     },
	I0505 21:56:09.448651   48764 command_runner.go:130] >     {
	I0505 21:56:09.448660   48764 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0505 21:56:09.448664   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448670   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0505 21:56:09.448676   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448681   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448692   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0505 21:56:09.448702   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0505 21:56:09.448705   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448710   48764 command_runner.go:130] >       "size": "1363676",
	I0505 21:56:09.448715   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.448728   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.448738   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.448745   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.448751   48764 command_runner.go:130] >     },
	I0505 21:56:09.448757   48764 command_runner.go:130] >     {
	I0505 21:56:09.448772   48764 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0505 21:56:09.448781   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448789   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0505 21:56:09.448797   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448803   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448823   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0505 21:56:09.448838   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0505 21:56:09.448847   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448853   48764 command_runner.go:130] >       "size": "31470524",
	I0505 21:56:09.448863   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.448869   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.448878   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.448884   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.448890   48764 command_runner.go:130] >     },
	I0505 21:56:09.448898   48764 command_runner.go:130] >     {
	I0505 21:56:09.448908   48764 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0505 21:56:09.448917   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448925   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0505 21:56:09.448934   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448954   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448967   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0505 21:56:09.448981   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0505 21:56:09.448987   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448992   48764 command_runner.go:130] >       "size": "61245718",
	I0505 21:56:09.448996   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.449000   48764 command_runner.go:130] >       "username": "nonroot",
	I0505 21:56:09.449009   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449015   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449023   48764 command_runner.go:130] >     },
	I0505 21:56:09.449029   48764 command_runner.go:130] >     {
	I0505 21:56:09.449042   48764 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0505 21:56:09.449051   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449058   48764 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0505 21:56:09.449066   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449072   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449084   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0505 21:56:09.449098   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0505 21:56:09.449105   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449109   48764 command_runner.go:130] >       "size": "150779692",
	I0505 21:56:09.449115   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449119   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449123   48764 command_runner.go:130] >       },
	I0505 21:56:09.449127   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449131   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449137   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449141   48764 command_runner.go:130] >     },
	I0505 21:56:09.449144   48764 command_runner.go:130] >     {
	I0505 21:56:09.449150   48764 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0505 21:56:09.449160   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449165   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0505 21:56:09.449168   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449172   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449179   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0505 21:56:09.449189   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0505 21:56:09.449192   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449201   48764 command_runner.go:130] >       "size": "117609952",
	I0505 21:56:09.449208   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449212   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449218   48764 command_runner.go:130] >       },
	I0505 21:56:09.449222   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449226   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449230   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449235   48764 command_runner.go:130] >     },
	I0505 21:56:09.449239   48764 command_runner.go:130] >     {
	I0505 21:56:09.449247   48764 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0505 21:56:09.449251   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449259   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0505 21:56:09.449265   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449272   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449282   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0505 21:56:09.449292   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0505 21:56:09.449297   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449302   48764 command_runner.go:130] >       "size": "112170310",
	I0505 21:56:09.449307   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449311   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449316   48764 command_runner.go:130] >       },
	I0505 21:56:09.449320   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449326   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449330   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449333   48764 command_runner.go:130] >     },
	I0505 21:56:09.449340   48764 command_runner.go:130] >     {
	I0505 21:56:09.449346   48764 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0505 21:56:09.449352   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449357   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0505 21:56:09.449363   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449367   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449389   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0505 21:56:09.449398   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0505 21:56:09.449404   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449409   48764 command_runner.go:130] >       "size": "85932953",
	I0505 21:56:09.449415   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.449423   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449430   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449434   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449437   48764 command_runner.go:130] >     },
	I0505 21:56:09.449440   48764 command_runner.go:130] >     {
	I0505 21:56:09.449445   48764 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0505 21:56:09.449449   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449453   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0505 21:56:09.449457   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449460   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449467   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0505 21:56:09.449476   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0505 21:56:09.449481   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449485   48764 command_runner.go:130] >       "size": "63026502",
	I0505 21:56:09.449491   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449495   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449501   48764 command_runner.go:130] >       },
	I0505 21:56:09.449505   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449511   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449515   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449520   48764 command_runner.go:130] >     },
	I0505 21:56:09.449523   48764 command_runner.go:130] >     {
	I0505 21:56:09.449531   48764 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0505 21:56:09.449536   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449540   48764 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0505 21:56:09.449546   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449549   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449558   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0505 21:56:09.449567   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0505 21:56:09.449571   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449578   48764 command_runner.go:130] >       "size": "750414",
	I0505 21:56:09.449582   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449588   48764 command_runner.go:130] >         "value": "65535"
	I0505 21:56:09.449591   48764 command_runner.go:130] >       },
	I0505 21:56:09.449597   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449602   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449613   48764 command_runner.go:130] >       "pinned": true
	I0505 21:56:09.449619   48764 command_runner.go:130] >     }
	I0505 21:56:09.449622   48764 command_runner.go:130] >   ]
	I0505 21:56:09.449628   48764 command_runner.go:130] > }
	I0505 21:56:09.449804   48764 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:56:09.449817   48764 crio.go:433] Images already preloaded, skipping extraction
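crio.go decides from the `crictl images --output json` listing above that every image needed for v1.30.0 is already in CRI-O's store, so it skips extracting the preload tarball; the same listing is requested once more just below. To eyeball the same data interactively (assuming jq is available wherever you run it):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort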
	I0505 21:56:09.449876   48764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:56:09.487917   48764 command_runner.go:130] > {
	I0505 21:56:09.487943   48764 command_runner.go:130] >   "images": [
	I0505 21:56:09.487950   48764 command_runner.go:130] >     {
	I0505 21:56:09.487962   48764 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0505 21:56:09.487969   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.487979   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0505 21:56:09.487988   48764 command_runner.go:130] >       ],
	I0505 21:56:09.487995   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488014   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0505 21:56:09.488028   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0505 21:56:09.488046   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488057   48764 command_runner.go:130] >       "size": "65291810",
	I0505 21:56:09.488065   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488069   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488077   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488083   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488087   48764 command_runner.go:130] >     },
	I0505 21:56:09.488090   48764 command_runner.go:130] >     {
	I0505 21:56:09.488097   48764 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0505 21:56:09.488103   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488108   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0505 21:56:09.488114   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488119   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488129   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0505 21:56:09.488139   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0505 21:56:09.488145   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488149   48764 command_runner.go:130] >       "size": "1363676",
	I0505 21:56:09.488154   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488162   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488168   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488171   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488174   48764 command_runner.go:130] >     },
	I0505 21:56:09.488180   48764 command_runner.go:130] >     {
	I0505 21:56:09.488186   48764 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0505 21:56:09.488192   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488197   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0505 21:56:09.488203   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488208   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488217   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0505 21:56:09.488227   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0505 21:56:09.488233   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488238   48764 command_runner.go:130] >       "size": "31470524",
	I0505 21:56:09.488244   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488248   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488258   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488265   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488268   48764 command_runner.go:130] >     },
	I0505 21:56:09.488272   48764 command_runner.go:130] >     {
	I0505 21:56:09.488278   48764 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0505 21:56:09.488285   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488290   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0505 21:56:09.488295   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488300   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488309   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0505 21:56:09.488325   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0505 21:56:09.488331   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488338   48764 command_runner.go:130] >       "size": "61245718",
	I0505 21:56:09.488346   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488356   48764 command_runner.go:130] >       "username": "nonroot",
	I0505 21:56:09.488366   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488375   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488383   48764 command_runner.go:130] >     },
	I0505 21:56:09.488391   48764 command_runner.go:130] >     {
	I0505 21:56:09.488401   48764 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0505 21:56:09.488407   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488412   48764 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0505 21:56:09.488418   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488422   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488432   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0505 21:56:09.488441   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0505 21:56:09.488447   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488451   48764 command_runner.go:130] >       "size": "150779692",
	I0505 21:56:09.488456   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488460   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488467   48764 command_runner.go:130] >       },
	I0505 21:56:09.488471   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488477   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488481   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488487   48764 command_runner.go:130] >     },
	I0505 21:56:09.488490   48764 command_runner.go:130] >     {
	I0505 21:56:09.488504   48764 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0505 21:56:09.488510   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488516   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0505 21:56:09.488521   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488525   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488534   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0505 21:56:09.488543   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0505 21:56:09.488548   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488552   48764 command_runner.go:130] >       "size": "117609952",
	I0505 21:56:09.488558   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488562   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488568   48764 command_runner.go:130] >       },
	I0505 21:56:09.488572   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488578   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488582   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488585   48764 command_runner.go:130] >     },
	I0505 21:56:09.488591   48764 command_runner.go:130] >     {
	I0505 21:56:09.488597   48764 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0505 21:56:09.488603   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488608   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0505 21:56:09.488614   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488618   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488628   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0505 21:56:09.488638   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0505 21:56:09.488650   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488656   48764 command_runner.go:130] >       "size": "112170310",
	I0505 21:56:09.488660   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488664   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488667   48764 command_runner.go:130] >       },
	I0505 21:56:09.488671   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488678   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488697   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488703   48764 command_runner.go:130] >     },
	I0505 21:56:09.488707   48764 command_runner.go:130] >     {
	I0505 21:56:09.488712   48764 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0505 21:56:09.488716   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488725   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0505 21:56:09.488731   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488735   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488757   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0505 21:56:09.488767   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0505 21:56:09.488770   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488775   48764 command_runner.go:130] >       "size": "85932953",
	I0505 21:56:09.488781   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488785   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488791   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488795   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488800   48764 command_runner.go:130] >     },
	I0505 21:56:09.488804   48764 command_runner.go:130] >     {
	I0505 21:56:09.488814   48764 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0505 21:56:09.488824   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488836   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0505 21:56:09.488843   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488849   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488862   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0505 21:56:09.488877   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0505 21:56:09.488886   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488895   48764 command_runner.go:130] >       "size": "63026502",
	I0505 21:56:09.488903   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488910   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488917   48764 command_runner.go:130] >       },
	I0505 21:56:09.488926   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488932   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488947   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488956   48764 command_runner.go:130] >     },
	I0505 21:56:09.488964   48764 command_runner.go:130] >     {
	I0505 21:56:09.488976   48764 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0505 21:56:09.488985   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488995   48764 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0505 21:56:09.489003   48764 command_runner.go:130] >       ],
	I0505 21:56:09.489012   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.489026   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0505 21:56:09.489046   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0505 21:56:09.489055   48764 command_runner.go:130] >       ],
	I0505 21:56:09.489063   48764 command_runner.go:130] >       "size": "750414",
	I0505 21:56:09.489069   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.489077   48764 command_runner.go:130] >         "value": "65535"
	I0505 21:56:09.489086   48764 command_runner.go:130] >       },
	I0505 21:56:09.489093   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.489101   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.489109   48764 command_runner.go:130] >       "pinned": true
	I0505 21:56:09.489112   48764 command_runner.go:130] >     }
	I0505 21:56:09.489117   48764 command_runner.go:130] >   ]
	I0505 21:56:09.489120   48764 command_runner.go:130] > }
	I0505 21:56:09.489642   48764 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:56:09.489660   48764 cache_images.go:84] Images are preloaded, skipping loading
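	The JSON above is the CRI-O image list that minikube inspects before deciding whether the preloaded images already cover the requested Kubernetes version. Purely as an illustration of the shape of that payload (the struct and variable names below are my own, not minikube's actual types in crio.go), a minimal Go sketch that decodes one entry could look like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// criImage mirrors the fields visible in the image-list JSON logged above.
	// The struct name and field selection are illustrative, not minikube's.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	}

	func main() {
		raw := `{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
		         "repoTags":["registry.k8s.io/etcd:3.5.12-0"],
		         "repoDigests":[],"size":"150779692","username":"","pinned":false}`

		var img criImage
		if err := json.Unmarshal([]byte(raw), &img); err != nil {
			panic(err)
		}
		// Print the tag; a preload check would compare tags (and likely digests)
		// like this against the list of images expected for the cluster version.
		fmt.Println(img.RepoTags[0], img.Pinned)
	}

	Fields such as "uid" and "spec" are left out of the sketch; they appear in the log above but are not needed to illustrate the tag comparison.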
	I0505 21:56:09.489668   48764 kubeadm.go:928] updating node { 192.168.39.30 8443 v1.30.0 crio true true} ...
	I0505 21:56:09.489775   48764 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-019621 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
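	The [Unit]/[Service] stanza and the cluster config dump above show the kubelet drop-in that minikube renders for this node. Purely as a hedged sketch (the template text, struct, and output path below are assumptions for illustration, not minikube's actual kubeadm templates), an equivalent drop-in could be produced with Go's text/template:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeOpts holds only the values visible in the log above; the struct is
	// illustrative, not minikube's real config type.
	type nodeOpts struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	// A trimmed-down drop-in template matching the stanza logged above.
	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		opts := nodeOpts{
			KubernetesVersion: "v1.30.0",
			NodeName:          "multinode-019621",
			NodeIP:            "192.168.39.30",
		}
		// Render to stdout; on a real node the rendered text would be written to a
		// systemd drop-in directory (exact path on the minikube VM assumed, not shown in the log).
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}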
	I0505 21:56:09.489844   48764 ssh_runner.go:195] Run: crio config
	I0505 21:56:09.535603   48764 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0505 21:56:09.535627   48764 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0505 21:56:09.535635   48764 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0505 21:56:09.535638   48764 command_runner.go:130] > #
	I0505 21:56:09.535645   48764 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0505 21:56:09.535651   48764 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0505 21:56:09.535659   48764 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0505 21:56:09.535669   48764 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0505 21:56:09.535675   48764 command_runner.go:130] > # reload'.
	I0505 21:56:09.535684   48764 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0505 21:56:09.535694   48764 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0505 21:56:09.535703   48764 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0505 21:56:09.535724   48764 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0505 21:56:09.535729   48764 command_runner.go:130] > [crio]
	I0505 21:56:09.535739   48764 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0505 21:56:09.535747   48764 command_runner.go:130] > # containers images, in this directory.
	I0505 21:56:09.535757   48764 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0505 21:56:09.535773   48764 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0505 21:56:09.536083   48764 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0505 21:56:09.536106   48764 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0505 21:56:09.536398   48764 command_runner.go:130] > # imagestore = ""
	I0505 21:56:09.536415   48764 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0505 21:56:09.536425   48764 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0505 21:56:09.536600   48764 command_runner.go:130] > storage_driver = "overlay"
	I0505 21:56:09.536617   48764 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0505 21:56:09.536626   48764 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0505 21:56:09.536633   48764 command_runner.go:130] > storage_option = [
	I0505 21:56:09.536807   48764 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0505 21:56:09.536953   48764 command_runner.go:130] > ]
	I0505 21:56:09.536969   48764 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0505 21:56:09.536979   48764 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0505 21:56:09.537384   48764 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0505 21:56:09.537399   48764 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0505 21:56:09.537409   48764 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0505 21:56:09.537417   48764 command_runner.go:130] > # always happen on a node reboot
	I0505 21:56:09.537774   48764 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0505 21:56:09.537807   48764 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0505 21:56:09.537823   48764 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0505 21:56:09.537835   48764 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0505 21:56:09.537898   48764 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0505 21:56:09.537916   48764 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0505 21:56:09.537928   48764 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0505 21:56:09.538286   48764 command_runner.go:130] > # internal_wipe = true
	I0505 21:56:09.538304   48764 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0505 21:56:09.538313   48764 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0505 21:56:09.539023   48764 command_runner.go:130] > # internal_repair = false
	I0505 21:56:09.539040   48764 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0505 21:56:09.539050   48764 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0505 21:56:09.539059   48764 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0505 21:56:09.539330   48764 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0505 21:56:09.539346   48764 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0505 21:56:09.539352   48764 command_runner.go:130] > [crio.api]
	I0505 21:56:09.539361   48764 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0505 21:56:09.539370   48764 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0505 21:56:09.539382   48764 command_runner.go:130] > # IP address on which the stream server will listen.
	I0505 21:56:09.539389   48764 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0505 21:56:09.539409   48764 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0505 21:56:09.539418   48764 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0505 21:56:09.539427   48764 command_runner.go:130] > # stream_port = "0"
	I0505 21:56:09.539436   48764 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0505 21:56:09.539446   48764 command_runner.go:130] > # stream_enable_tls = false
	I0505 21:56:09.539457   48764 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0505 21:56:09.539466   48764 command_runner.go:130] > # stream_idle_timeout = ""
	I0505 21:56:09.539476   48764 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0505 21:56:09.539500   48764 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0505 21:56:09.539506   48764 command_runner.go:130] > # minutes.
	I0505 21:56:09.539515   48764 command_runner.go:130] > # stream_tls_cert = ""
	I0505 21:56:09.539524   48764 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0505 21:56:09.539537   48764 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0505 21:56:09.539545   48764 command_runner.go:130] > # stream_tls_key = ""
	I0505 21:56:09.539556   48764 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0505 21:56:09.539566   48764 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0505 21:56:09.539593   48764 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0505 21:56:09.539606   48764 command_runner.go:130] > # stream_tls_ca = ""
	I0505 21:56:09.539617   48764 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0505 21:56:09.539627   48764 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0505 21:56:09.539640   48764 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0505 21:56:09.539658   48764 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0505 21:56:09.539670   48764 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0505 21:56:09.539684   48764 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0505 21:56:09.539698   48764 command_runner.go:130] > [crio.runtime]
	I0505 21:56:09.539711   48764 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0505 21:56:09.539722   48764 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0505 21:56:09.539729   48764 command_runner.go:130] > # "nofile=1024:2048"
	I0505 21:56:09.539741   48764 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0505 21:56:09.539747   48764 command_runner.go:130] > # default_ulimits = [
	I0505 21:56:09.539757   48764 command_runner.go:130] > # ]
	I0505 21:56:09.539766   48764 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0505 21:56:09.539776   48764 command_runner.go:130] > # no_pivot = false
	I0505 21:56:09.539785   48764 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0505 21:56:09.539798   48764 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0505 21:56:09.539809   48764 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0505 21:56:09.539822   48764 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0505 21:56:09.539833   48764 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0505 21:56:09.539848   48764 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0505 21:56:09.539858   48764 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0505 21:56:09.539865   48764 command_runner.go:130] > # Cgroup setting for conmon
	I0505 21:56:09.539879   48764 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0505 21:56:09.539889   48764 command_runner.go:130] > conmon_cgroup = "pod"
	I0505 21:56:09.539899   48764 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0505 21:56:09.539910   48764 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0505 21:56:09.539923   48764 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0505 21:56:09.539929   48764 command_runner.go:130] > conmon_env = [
	I0505 21:56:09.539942   48764 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0505 21:56:09.539948   48764 command_runner.go:130] > ]
	I0505 21:56:09.539953   48764 command_runner.go:130] > # Additional environment variables to set for all the
	I0505 21:56:09.539961   48764 command_runner.go:130] > # containers. These are overridden if set in the
	I0505 21:56:09.539966   48764 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0505 21:56:09.539973   48764 command_runner.go:130] > # default_env = [
	I0505 21:56:09.539976   48764 command_runner.go:130] > # ]
	I0505 21:56:09.539982   48764 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0505 21:56:09.539991   48764 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0505 21:56:09.539995   48764 command_runner.go:130] > # selinux = false
	I0505 21:56:09.540006   48764 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0505 21:56:09.540016   48764 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0505 21:56:09.540021   48764 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0505 21:56:09.540028   48764 command_runner.go:130] > # seccomp_profile = ""
	I0505 21:56:09.540037   48764 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0505 21:56:09.540048   48764 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0505 21:56:09.540057   48764 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0505 21:56:09.540067   48764 command_runner.go:130] > # which might increase security.
	I0505 21:56:09.540078   48764 command_runner.go:130] > # This option is currently deprecated,
	I0505 21:56:09.540090   48764 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0505 21:56:09.540100   48764 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0505 21:56:09.540111   48764 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0505 21:56:09.540128   48764 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0505 21:56:09.540137   48764 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0505 21:56:09.540142   48764 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0505 21:56:09.540153   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.540163   48764 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0505 21:56:09.540176   48764 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0505 21:56:09.540186   48764 command_runner.go:130] > # the cgroup blockio controller.
	I0505 21:56:09.540195   48764 command_runner.go:130] > # blockio_config_file = ""
	I0505 21:56:09.540208   48764 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0505 21:56:09.540214   48764 command_runner.go:130] > # blockio parameters.
	I0505 21:56:09.540224   48764 command_runner.go:130] > # blockio_reload = false
	I0505 21:56:09.540236   48764 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0505 21:56:09.540241   48764 command_runner.go:130] > # irqbalance daemon.
	I0505 21:56:09.540253   48764 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0505 21:56:09.540267   48764 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0505 21:56:09.540281   48764 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0505 21:56:09.540294   48764 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0505 21:56:09.540303   48764 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0505 21:56:09.540313   48764 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0505 21:56:09.540324   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.540332   48764 command_runner.go:130] > # rdt_config_file = ""
	I0505 21:56:09.540337   48764 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0505 21:56:09.540343   48764 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0505 21:56:09.540372   48764 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0505 21:56:09.540390   48764 command_runner.go:130] > # separate_pull_cgroup = ""
	I0505 21:56:09.540396   48764 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0505 21:56:09.540402   48764 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0505 21:56:09.540406   48764 command_runner.go:130] > # will be added.
	I0505 21:56:09.540410   48764 command_runner.go:130] > # default_capabilities = [
	I0505 21:56:09.540413   48764 command_runner.go:130] > # 	"CHOWN",
	I0505 21:56:09.540417   48764 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0505 21:56:09.540420   48764 command_runner.go:130] > # 	"FSETID",
	I0505 21:56:09.540424   48764 command_runner.go:130] > # 	"FOWNER",
	I0505 21:56:09.540427   48764 command_runner.go:130] > # 	"SETGID",
	I0505 21:56:09.540431   48764 command_runner.go:130] > # 	"SETUID",
	I0505 21:56:09.540435   48764 command_runner.go:130] > # 	"SETPCAP",
	I0505 21:56:09.540439   48764 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0505 21:56:09.540443   48764 command_runner.go:130] > # 	"KILL",
	I0505 21:56:09.540446   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540453   48764 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0505 21:56:09.540462   48764 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0505 21:56:09.540466   48764 command_runner.go:130] > # add_inheritable_capabilities = false
	I0505 21:56:09.540473   48764 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0505 21:56:09.540478   48764 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0505 21:56:09.540484   48764 command_runner.go:130] > default_sysctls = [
	I0505 21:56:09.540489   48764 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0505 21:56:09.540496   48764 command_runner.go:130] > ]
	I0505 21:56:09.540503   48764 command_runner.go:130] > # List of devices on the host that a
	I0505 21:56:09.540516   48764 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0505 21:56:09.540526   48764 command_runner.go:130] > # allowed_devices = [
	I0505 21:56:09.540532   48764 command_runner.go:130] > # 	"/dev/fuse",
	I0505 21:56:09.540541   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540547   48764 command_runner.go:130] > # List of additional devices, specified as
	I0505 21:56:09.540557   48764 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0505 21:56:09.540562   48764 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0505 21:56:09.540570   48764 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0505 21:56:09.540574   48764 command_runner.go:130] > # additional_devices = [
	I0505 21:56:09.540578   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540583   48764 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0505 21:56:09.540586   48764 command_runner.go:130] > # cdi_spec_dirs = [
	I0505 21:56:09.540595   48764 command_runner.go:130] > # 	"/etc/cdi",
	I0505 21:56:09.540601   48764 command_runner.go:130] > # 	"/var/run/cdi",
	I0505 21:56:09.540605   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540611   48764 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0505 21:56:09.540619   48764 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0505 21:56:09.540624   48764 command_runner.go:130] > # Defaults to false.
	I0505 21:56:09.540629   48764 command_runner.go:130] > # device_ownership_from_security_context = false
	I0505 21:56:09.540637   48764 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0505 21:56:09.540643   48764 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0505 21:56:09.540649   48764 command_runner.go:130] > # hooks_dir = [
	I0505 21:56:09.540654   48764 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0505 21:56:09.540660   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540665   48764 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0505 21:56:09.540671   48764 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0505 21:56:09.540678   48764 command_runner.go:130] > # its default mounts from the following two files:
	I0505 21:56:09.540683   48764 command_runner.go:130] > #
	I0505 21:56:09.540703   48764 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0505 21:56:09.540717   48764 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0505 21:56:09.540727   48764 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0505 21:56:09.540730   48764 command_runner.go:130] > #
	I0505 21:56:09.540736   48764 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0505 21:56:09.540744   48764 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0505 21:56:09.540751   48764 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0505 21:56:09.540757   48764 command_runner.go:130] > #      only add mounts it finds in this file.
	I0505 21:56:09.540761   48764 command_runner.go:130] > #
	I0505 21:56:09.540764   48764 command_runner.go:130] > # default_mounts_file = ""
	I0505 21:56:09.540769   48764 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0505 21:56:09.540777   48764 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0505 21:56:09.540780   48764 command_runner.go:130] > pids_limit = 1024
	I0505 21:56:09.540786   48764 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0505 21:56:09.540796   48764 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0505 21:56:09.540809   48764 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0505 21:56:09.540823   48764 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0505 21:56:09.540834   48764 command_runner.go:130] > # log_size_max = -1
	I0505 21:56:09.540845   48764 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0505 21:56:09.540854   48764 command_runner.go:130] > # log_to_journald = false
	I0505 21:56:09.540865   48764 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0505 21:56:09.540873   48764 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0505 21:56:09.540878   48764 command_runner.go:130] > # Path to directory for container attach sockets.
	I0505 21:56:09.540884   48764 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0505 21:56:09.540890   48764 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0505 21:56:09.540896   48764 command_runner.go:130] > # bind_mount_prefix = ""
	I0505 21:56:09.540904   48764 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0505 21:56:09.540914   48764 command_runner.go:130] > # read_only = false
	I0505 21:56:09.540923   48764 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0505 21:56:09.540937   48764 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0505 21:56:09.540950   48764 command_runner.go:130] > # live configuration reload.
	I0505 21:56:09.540960   48764 command_runner.go:130] > # log_level = "info"
	I0505 21:56:09.540969   48764 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0505 21:56:09.540981   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.540991   48764 command_runner.go:130] > # log_filter = ""
	I0505 21:56:09.541001   48764 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0505 21:56:09.541014   48764 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0505 21:56:09.541023   48764 command_runner.go:130] > # separated by comma.
	I0505 21:56:09.541045   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541056   48764 command_runner.go:130] > # uid_mappings = ""
	I0505 21:56:09.541066   48764 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0505 21:56:09.541078   48764 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0505 21:56:09.541088   48764 command_runner.go:130] > # separated by comma.
	I0505 21:56:09.541099   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541108   48764 command_runner.go:130] > # gid_mappings = ""
	I0505 21:56:09.541117   48764 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0505 21:56:09.541132   48764 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0505 21:56:09.541145   48764 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0505 21:56:09.541160   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541167   48764 command_runner.go:130] > # minimum_mappable_uid = -1
	I0505 21:56:09.541179   48764 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0505 21:56:09.541191   48764 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0505 21:56:09.541204   48764 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0505 21:56:09.541215   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541228   48764 command_runner.go:130] > # minimum_mappable_gid = -1
	I0505 21:56:09.541240   48764 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0505 21:56:09.541259   48764 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0505 21:56:09.541271   48764 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0505 21:56:09.541280   48764 command_runner.go:130] > # ctr_stop_timeout = 30
	I0505 21:56:09.541289   48764 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0505 21:56:09.541302   48764 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0505 21:56:09.541313   48764 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0505 21:56:09.541323   48764 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0505 21:56:09.541330   48764 command_runner.go:130] > drop_infra_ctr = false
	I0505 21:56:09.541339   48764 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0505 21:56:09.541351   48764 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0505 21:56:09.541363   48764 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0505 21:56:09.541373   48764 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0505 21:56:09.541385   48764 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0505 21:56:09.541398   48764 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0505 21:56:09.541410   48764 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0505 21:56:09.541422   48764 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0505 21:56:09.541432   48764 command_runner.go:130] > # shared_cpuset = ""
	I0505 21:56:09.541442   48764 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0505 21:56:09.541453   48764 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0505 21:56:09.541460   48764 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0505 21:56:09.541481   48764 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0505 21:56:09.541491   48764 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0505 21:56:09.541500   48764 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0505 21:56:09.541513   48764 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0505 21:56:09.541520   48764 command_runner.go:130] > # enable_criu_support = false
	I0505 21:56:09.541532   48764 command_runner.go:130] > # Enable/disable the generation of the container,
	I0505 21:56:09.541542   48764 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0505 21:56:09.541552   48764 command_runner.go:130] > # enable_pod_events = false
	I0505 21:56:09.541562   48764 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0505 21:56:09.541584   48764 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0505 21:56:09.541594   48764 command_runner.go:130] > # default_runtime = "runc"
	I0505 21:56:09.541602   48764 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0505 21:56:09.541617   48764 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0505 21:56:09.541639   48764 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0505 21:56:09.541649   48764 command_runner.go:130] > # creation as a file is not desired either.
	I0505 21:56:09.541662   48764 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0505 21:56:09.541673   48764 command_runner.go:130] > # the hostname is being managed dynamically.
	I0505 21:56:09.541683   48764 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0505 21:56:09.541688   48764 command_runner.go:130] > # ]
	I0505 21:56:09.541710   48764 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0505 21:56:09.541723   48764 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0505 21:56:09.541735   48764 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0505 21:56:09.541746   48764 command_runner.go:130] > # Each entry in the table should follow the format:
	I0505 21:56:09.541751   48764 command_runner.go:130] > #
	I0505 21:56:09.541760   48764 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0505 21:56:09.541772   48764 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0505 21:56:09.541827   48764 command_runner.go:130] > # runtime_type = "oci"
	I0505 21:56:09.541835   48764 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0505 21:56:09.541840   48764 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0505 21:56:09.541844   48764 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0505 21:56:09.541848   48764 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0505 21:56:09.541852   48764 command_runner.go:130] > # monitor_env = []
	I0505 21:56:09.541856   48764 command_runner.go:130] > # privileged_without_host_devices = false
	I0505 21:56:09.541863   48764 command_runner.go:130] > # allowed_annotations = []
	I0505 21:56:09.541868   48764 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0505 21:56:09.541874   48764 command_runner.go:130] > # Where:
	I0505 21:56:09.541879   48764 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0505 21:56:09.541885   48764 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0505 21:56:09.541893   48764 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0505 21:56:09.541899   48764 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0505 21:56:09.541905   48764 command_runner.go:130] > #   in $PATH.
	I0505 21:56:09.541911   48764 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0505 21:56:09.541916   48764 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0505 21:56:09.541922   48764 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0505 21:56:09.541928   48764 command_runner.go:130] > #   state.
	I0505 21:56:09.541934   48764 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0505 21:56:09.541940   48764 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0505 21:56:09.541946   48764 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0505 21:56:09.541954   48764 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0505 21:56:09.541960   48764 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0505 21:56:09.541968   48764 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0505 21:56:09.541977   48764 command_runner.go:130] > #   The currently recognized values are:
	I0505 21:56:09.541985   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0505 21:56:09.541992   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0505 21:56:09.542000   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0505 21:56:09.542006   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0505 21:56:09.542015   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0505 21:56:09.542021   48764 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0505 21:56:09.542030   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0505 21:56:09.542035   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0505 21:56:09.542043   48764 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0505 21:56:09.542050   48764 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0505 21:56:09.542056   48764 command_runner.go:130] > #   deprecated option "conmon".
	I0505 21:56:09.542063   48764 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0505 21:56:09.542070   48764 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0505 21:56:09.542076   48764 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0505 21:56:09.542083   48764 command_runner.go:130] > #   should be moved to the container's cgroup
	I0505 21:56:09.542089   48764 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0505 21:56:09.542096   48764 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0505 21:56:09.542103   48764 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0505 21:56:09.542110   48764 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0505 21:56:09.542113   48764 command_runner.go:130] > #
	I0505 21:56:09.542118   48764 command_runner.go:130] > # Using the seccomp notifier feature:
	I0505 21:56:09.542122   48764 command_runner.go:130] > #
	I0505 21:56:09.542128   48764 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0505 21:56:09.542135   48764 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0505 21:56:09.542138   48764 command_runner.go:130] > #
	I0505 21:56:09.542143   48764 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0505 21:56:09.542151   48764 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0505 21:56:09.542154   48764 command_runner.go:130] > #
	I0505 21:56:09.542160   48764 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0505 21:56:09.542164   48764 command_runner.go:130] > # feature.
	I0505 21:56:09.542167   48764 command_runner.go:130] > #
	I0505 21:56:09.542173   48764 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0505 21:56:09.542181   48764 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0505 21:56:09.542187   48764 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0505 21:56:09.542195   48764 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0505 21:56:09.542206   48764 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0505 21:56:09.542211   48764 command_runner.go:130] > #
	I0505 21:56:09.542216   48764 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0505 21:56:09.542224   48764 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0505 21:56:09.542228   48764 command_runner.go:130] > #
	I0505 21:56:09.542234   48764 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0505 21:56:09.542244   48764 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0505 21:56:09.542250   48764 command_runner.go:130] > #
	I0505 21:56:09.542255   48764 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0505 21:56:09.542262   48764 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0505 21:56:09.542267   48764 command_runner.go:130] > # limitation.
	I0505 21:56:09.542272   48764 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0505 21:56:09.542277   48764 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0505 21:56:09.542280   48764 command_runner.go:130] > runtime_type = "oci"
	I0505 21:56:09.542284   48764 command_runner.go:130] > runtime_root = "/run/runc"
	I0505 21:56:09.542288   48764 command_runner.go:130] > runtime_config_path = ""
	I0505 21:56:09.542294   48764 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0505 21:56:09.542298   48764 command_runner.go:130] > monitor_cgroup = "pod"
	I0505 21:56:09.542304   48764 command_runner.go:130] > monitor_exec_cgroup = ""
	I0505 21:56:09.542308   48764 command_runner.go:130] > monitor_env = [
	I0505 21:56:09.542316   48764 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0505 21:56:09.542319   48764 command_runner.go:130] > ]
	I0505 21:56:09.542325   48764 command_runner.go:130] > privileged_without_host_devices = false
	I0505 21:56:09.542332   48764 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0505 21:56:09.542339   48764 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0505 21:56:09.542345   48764 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0505 21:56:09.542353   48764 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0505 21:56:09.542360   48764 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0505 21:56:09.542368   48764 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0505 21:56:09.542376   48764 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0505 21:56:09.542386   48764 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0505 21:56:09.542393   48764 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0505 21:56:09.542400   48764 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0505 21:56:09.542406   48764 command_runner.go:130] > # Example:
	I0505 21:56:09.542411   48764 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0505 21:56:09.542418   48764 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0505 21:56:09.542429   48764 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0505 21:56:09.542436   48764 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0505 21:56:09.542440   48764 command_runner.go:130] > # cpuset = 0
	I0505 21:56:09.542446   48764 command_runner.go:130] > # cpushares = "0-1"
	I0505 21:56:09.542449   48764 command_runner.go:130] > # Where:
	I0505 21:56:09.542454   48764 command_runner.go:130] > # The workload name is workload-type.
	I0505 21:56:09.542463   48764 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0505 21:56:09.542468   48764 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0505 21:56:09.542473   48764 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0505 21:56:09.542483   48764 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0505 21:56:09.542489   48764 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0505 21:56:09.542496   48764 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0505 21:56:09.542502   48764 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0505 21:56:09.542508   48764 command_runner.go:130] > # Default value is set to true
	I0505 21:56:09.542512   48764 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0505 21:56:09.542519   48764 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0505 21:56:09.542523   48764 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0505 21:56:09.542530   48764 command_runner.go:130] > # Default value is set to 'false'
	I0505 21:56:09.542534   48764 command_runner.go:130] > # disable_hostport_mapping = false
	I0505 21:56:09.542540   48764 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0505 21:56:09.542545   48764 command_runner.go:130] > #
	I0505 21:56:09.542551   48764 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0505 21:56:09.542559   48764 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0505 21:56:09.542565   48764 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0505 21:56:09.542571   48764 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0505 21:56:09.542576   48764 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0505 21:56:09.542579   48764 command_runner.go:130] > [crio.image]
	I0505 21:56:09.542584   48764 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0505 21:56:09.542588   48764 command_runner.go:130] > # default_transport = "docker://"
	I0505 21:56:09.542594   48764 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0505 21:56:09.542600   48764 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0505 21:56:09.542603   48764 command_runner.go:130] > # global_auth_file = ""
	I0505 21:56:09.542608   48764 command_runner.go:130] > # The image used to instantiate infra containers.
	I0505 21:56:09.542612   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.542617   48764 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0505 21:56:09.542622   48764 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0505 21:56:09.542632   48764 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0505 21:56:09.542637   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.542640   48764 command_runner.go:130] > # pause_image_auth_file = ""
	I0505 21:56:09.542645   48764 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0505 21:56:09.542651   48764 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0505 21:56:09.542656   48764 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0505 21:56:09.542661   48764 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0505 21:56:09.542665   48764 command_runner.go:130] > # pause_command = "/pause"
	I0505 21:56:09.542671   48764 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0505 21:56:09.542676   48764 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0505 21:56:09.542682   48764 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0505 21:56:09.542687   48764 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0505 21:56:09.542697   48764 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0505 21:56:09.542708   48764 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0505 21:56:09.542714   48764 command_runner.go:130] > # pinned_images = [
	I0505 21:56:09.542718   48764 command_runner.go:130] > # ]
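Editor's note: the pinned_images comments above describe three matching modes (exact whole-name, glob with a trailing *, keyword with wildcards on both ends). A minimal Go sketch of that matching logic, assuming a hypothetical matchesPinned helper rather than CRI-O's actual code:

// Hypothetical sketch of the three pinned_images matching modes described
// above; not CRI-O's implementation.
package main

import (
	"fmt"
	"strings"
)

// matchesPinned reports whether image matches one pinned_images pattern:
// exact (whole name), glob (trailing *), or keyword (* on both ends).
func matchesPinned(pattern, image string) bool {
	switch {
	case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
		return strings.Contains(image, strings.Trim(pattern, "*")) // keyword
	case strings.HasSuffix(pattern, "*"):
		return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*")) // glob
	default:
		return pattern == image // exact
	}
}

func main() {
	fmt.Println(matchesPinned("registry.k8s.io/pause:3.9", "registry.k8s.io/pause:3.9")) // true
	fmt.Println(matchesPinned("registry.k8s.io/*", "registry.k8s.io/pause:3.9"))         // true
	fmt.Println(matchesPinned("*pause*", "registry.k8s.io/pause:3.9"))                   // true
}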
	I0505 21:56:09.542723   48764 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0505 21:56:09.542729   48764 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0505 21:56:09.542736   48764 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0505 21:56:09.542742   48764 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0505 21:56:09.542749   48764 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0505 21:56:09.542753   48764 command_runner.go:130] > # signature_policy = ""
	I0505 21:56:09.542759   48764 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0505 21:56:09.542764   48764 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0505 21:56:09.542773   48764 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0505 21:56:09.542779   48764 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0505 21:56:09.542787   48764 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0505 21:56:09.542792   48764 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0505 21:56:09.542800   48764 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0505 21:56:09.542806   48764 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0505 21:56:09.542812   48764 command_runner.go:130] > # changing them here.
	I0505 21:56:09.542816   48764 command_runner.go:130] > # insecure_registries = [
	I0505 21:56:09.542819   48764 command_runner.go:130] > # ]
	I0505 21:56:09.542825   48764 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0505 21:56:09.542832   48764 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0505 21:56:09.542836   48764 command_runner.go:130] > # image_volumes = "mkdir"
	I0505 21:56:09.542851   48764 command_runner.go:130] > # Temporary directory to use for storing big files
	I0505 21:56:09.542858   48764 command_runner.go:130] > # big_files_temporary_dir = ""
	I0505 21:56:09.542863   48764 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0505 21:56:09.542869   48764 command_runner.go:130] > # CNI plugins.
	I0505 21:56:09.542873   48764 command_runner.go:130] > [crio.network]
	I0505 21:56:09.542880   48764 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0505 21:56:09.542885   48764 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0505 21:56:09.542890   48764 command_runner.go:130] > # cni_default_network = ""
	I0505 21:56:09.542896   48764 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0505 21:56:09.542902   48764 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0505 21:56:09.542907   48764 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0505 21:56:09.542910   48764 command_runner.go:130] > # plugin_dirs = [
	I0505 21:56:09.542916   48764 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0505 21:56:09.542919   48764 command_runner.go:130] > # ]
	I0505 21:56:09.542924   48764 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0505 21:56:09.542930   48764 command_runner.go:130] > [crio.metrics]
	I0505 21:56:09.542935   48764 command_runner.go:130] > # Globally enable or disable metrics support.
	I0505 21:56:09.542939   48764 command_runner.go:130] > enable_metrics = true
	I0505 21:56:09.542948   48764 command_runner.go:130] > # Specify enabled metrics collectors.
	I0505 21:56:09.542955   48764 command_runner.go:130] > # Per default all metrics are enabled.
	I0505 21:56:09.542961   48764 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0505 21:56:09.542969   48764 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0505 21:56:09.542975   48764 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0505 21:56:09.542981   48764 command_runner.go:130] > # metrics_collectors = [
	I0505 21:56:09.542985   48764 command_runner.go:130] > # 	"operations",
	I0505 21:56:09.542989   48764 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0505 21:56:09.542994   48764 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0505 21:56:09.542999   48764 command_runner.go:130] > # 	"operations_errors",
	I0505 21:56:09.543004   48764 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0505 21:56:09.543010   48764 command_runner.go:130] > # 	"image_pulls_by_name",
	I0505 21:56:09.543014   48764 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0505 21:56:09.543020   48764 command_runner.go:130] > # 	"image_pulls_failures",
	I0505 21:56:09.543024   48764 command_runner.go:130] > # 	"image_pulls_successes",
	I0505 21:56:09.543030   48764 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0505 21:56:09.543034   48764 command_runner.go:130] > # 	"image_layer_reuse",
	I0505 21:56:09.543038   48764 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0505 21:56:09.543047   48764 command_runner.go:130] > # 	"containers_oom_total",
	I0505 21:56:09.543053   48764 command_runner.go:130] > # 	"containers_oom",
	I0505 21:56:09.543057   48764 command_runner.go:130] > # 	"processes_defunct",
	I0505 21:56:09.543062   48764 command_runner.go:130] > # 	"operations_total",
	I0505 21:56:09.543067   48764 command_runner.go:130] > # 	"operations_latency_seconds",
	I0505 21:56:09.543074   48764 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0505 21:56:09.543078   48764 command_runner.go:130] > # 	"operations_errors_total",
	I0505 21:56:09.543082   48764 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0505 21:56:09.543086   48764 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0505 21:56:09.543093   48764 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0505 21:56:09.543097   48764 command_runner.go:130] > # 	"image_pulls_success_total",
	I0505 21:56:09.543101   48764 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0505 21:56:09.543107   48764 command_runner.go:130] > # 	"containers_oom_count_total",
	I0505 21:56:09.543112   48764 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0505 21:56:09.543117   48764 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0505 21:56:09.543120   48764 command_runner.go:130] > # ]
	I0505 21:56:09.543127   48764 command_runner.go:130] > # The port on which the metrics server will listen.
	I0505 21:56:09.543131   48764 command_runner.go:130] > # metrics_port = 9090
	I0505 21:56:09.543138   48764 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0505 21:56:09.543141   48764 command_runner.go:130] > # metrics_socket = ""
	I0505 21:56:09.543151   48764 command_runner.go:130] > # The certificate for the secure metrics server.
	I0505 21:56:09.543160   48764 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0505 21:56:09.543166   48764 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0505 21:56:09.543173   48764 command_runner.go:130] > # certificate on any modification event.
	I0505 21:56:09.543177   48764 command_runner.go:130] > # metrics_cert = ""
	I0505 21:56:09.543183   48764 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0505 21:56:09.543188   48764 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0505 21:56:09.543981   48764 command_runner.go:130] > # metrics_key = ""
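Editor's note: with enable_metrics = true and the default metrics_port of 9090 shown above, CRI-O exposes Prometheus metrics over HTTP. A minimal Go sketch of probing that endpoint, assuming the default port and the usual Prometheus /metrics path, run on the node itself:

// Minimal sketch: fetch CRI-O's Prometheus metrics, assuming enable_metrics
// is true and the default metrics_port 9090 shown above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatalf("fetching CRI-O metrics: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading response: %v", err)
	}
	// Print the raw exposition-format text; collectors such as
	// crio_operations_total appear here when enabled.
	fmt.Print(string(body))
}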
	I0505 21:56:09.544004   48764 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0505 21:56:09.544010   48764 command_runner.go:130] > [crio.tracing]
	I0505 21:56:09.544019   48764 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0505 21:56:09.544025   48764 command_runner.go:130] > # enable_tracing = false
	I0505 21:56:09.544032   48764 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0505 21:56:09.544040   48764 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0505 21:56:09.544053   48764 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0505 21:56:09.544063   48764 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0505 21:56:09.544079   48764 command_runner.go:130] > # CRI-O NRI configuration.
	I0505 21:56:09.544089   48764 command_runner.go:130] > [crio.nri]
	I0505 21:56:09.544095   48764 command_runner.go:130] > # Globally enable or disable NRI.
	I0505 21:56:09.545138   48764 command_runner.go:130] > # enable_nri = false
	I0505 21:56:09.545148   48764 command_runner.go:130] > # NRI socket to listen on.
	I0505 21:56:09.545153   48764 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0505 21:56:09.545157   48764 command_runner.go:130] > # NRI plugin directory to use.
	I0505 21:56:09.545161   48764 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0505 21:56:09.545166   48764 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0505 21:56:09.545170   48764 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0505 21:56:09.545175   48764 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0505 21:56:09.545180   48764 command_runner.go:130] > # nri_disable_connections = false
	I0505 21:56:09.545186   48764 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0505 21:56:09.545191   48764 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0505 21:56:09.545198   48764 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0505 21:56:09.545203   48764 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0505 21:56:09.545208   48764 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0505 21:56:09.545212   48764 command_runner.go:130] > [crio.stats]
	I0505 21:56:09.545220   48764 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0505 21:56:09.545225   48764 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0505 21:56:09.545229   48764 command_runner.go:130] > # stats_collection_period = 0
	I0505 21:56:09.545642   48764 command_runner.go:130] ! time="2024-05-05 21:56:09.501338966Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0505 21:56:09.545662   48764 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0505 21:56:09.545787   48764 cni.go:84] Creating CNI manager for ""
	I0505 21:56:09.545800   48764 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 21:56:09.545809   48764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:56:09.545828   48764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.30 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-019621 NodeName:multinode-019621 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:56:09.545964   48764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-019621"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
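Editor's note: the kubeadm config above is rendered from the options logged earlier (node name, advertise address, CRI socket, pod subnet, and so on). A stripped-down sketch of that idea using only text/template; the struct and field names here are hypothetical, not minikube's real template:

// Stripped-down sketch of rendering an InitConfiguration fragment from a few
// of the options above; field names are hypothetical, not minikube's.
package main

import (
	"log"
	"os"
	"text/template"
)

type initOpts struct {
	NodeName         string
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initTmpl))
	opts := initOpts{
		NodeName:         "multinode-019621",
		AdvertiseAddress: "192.168.39.30",
		BindPort:         8443,
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		log.Fatal(err)
	}
}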
	
	I0505 21:56:09.546031   48764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:56:09.558281   48764 command_runner.go:130] > kubeadm
	I0505 21:56:09.558301   48764 command_runner.go:130] > kubectl
	I0505 21:56:09.558306   48764 command_runner.go:130] > kubelet
	I0505 21:56:09.558372   48764 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:56:09.558427   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 21:56:09.569885   48764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0505 21:56:09.588324   48764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:56:09.606458   48764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0505 21:56:09.624553   48764 ssh_runner.go:195] Run: grep 192.168.39.30	control-plane.minikube.internal$ /etc/hosts
	I0505 21:56:09.628741   48764 command_runner.go:130] > 192.168.39.30	control-plane.minikube.internal
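Editor's note: the grep above only verifies that /etc/hosts already maps control-plane.minikube.internal to the node IP. A small Go sketch of the same check, using a hypothetical hasHostEntry helper rather than minikube's code:

// Sketch of the /etc/hosts check above: does the file already map the
// control-plane hostname to the expected IP? Not minikube's implementation.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func hasHostEntry(path, ip, host string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == ip {
			for _, name := range fields[1:] {
				if name == host {
					return true, nil
				}
			}
		}
	}
	return false, scanner.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.39.30", "control-plane.minikube.internal")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("entry present:", ok)
}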
	I0505 21:56:09.628792   48764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:56:09.770333   48764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:56:09.788875   48764 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621 for IP: 192.168.39.30
	I0505 21:56:09.788902   48764 certs.go:194] generating shared ca certs ...
	I0505 21:56:09.788922   48764 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:56:09.789107   48764 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:56:09.789172   48764 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:56:09.789185   48764 certs.go:256] generating profile certs ...
	I0505 21:56:09.789291   48764 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/client.key
	I0505 21:56:09.789377   48764 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.key.2eb61cd2
	I0505 21:56:09.789432   48764 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.key
	I0505 21:56:09.789445   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:56:09.789461   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:56:09.789477   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:56:09.789489   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:56:09.789501   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:56:09.789513   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:56:09.789525   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:56:09.789542   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:56:09.789593   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:56:09.789622   48764 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:56:09.789632   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:56:09.789654   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:56:09.789686   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:56:09.789709   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:56:09.789753   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:56:09.789787   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:56:09.789798   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:09.789822   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:56:09.790443   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:56:09.817370   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:56:09.842975   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:56:09.868984   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:56:09.895031   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0505 21:56:09.921744   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:56:09.949042   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:56:09.976965   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 21:56:10.003808   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:56:10.029460   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:56:10.056338   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:56:10.082631   48764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:56:10.101057   48764 ssh_runner.go:195] Run: openssl version
	I0505 21:56:10.107338   48764 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0505 21:56:10.107404   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:56:10.119456   48764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.124275   48764 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.124553   48764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.124601   48764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.130568   48764 command_runner.go:130] > 3ec20f2e
	I0505 21:56:10.130759   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:56:10.141618   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:56:10.154479   48764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.159344   48764 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.159495   48764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.159543   48764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.165489   48764 command_runner.go:130] > b5213941
	I0505 21:56:10.165636   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:56:10.176651   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:56:10.189570   48764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.194672   48764 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.194782   48764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.194836   48764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.201288   48764 command_runner.go:130] > 51391683
	I0505 21:56:10.201333   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
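Editor's note: each CA certificate above gets an OpenSSL subject-hash symlink (/etc/ssl/certs/<hash>.0) so TLS libraries can find it by subject. A Go sketch mirroring those two shell steps, shelling out to the same openssl invocation seen in the log; requires root for the default target directory, and it is not minikube's implementation:

// Sketch mirroring the steps above: compute the OpenSSL subject hash of a
// CA certificate and create the /etc/ssl/certs/<hash>.0 symlink.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // symlink already exists
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked:", link)
}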
	I0505 21:56:10.212405   48764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:56:10.217480   48764 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:56:10.217497   48764 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0505 21:56:10.217503   48764 command_runner.go:130] > Device: 253,1	Inode: 533782      Links: 1
	I0505 21:56:10.217509   48764 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0505 21:56:10.217521   48764 command_runner.go:130] > Access: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217526   48764 command_runner.go:130] > Modify: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217532   48764 command_runner.go:130] > Change: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217538   48764 command_runner.go:130] >  Birth: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217575   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 21:56:10.223473   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.223540   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 21:56:10.229300   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.229478   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 21:56:10.235376   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.235432   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 21:56:10.241113   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.241272   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 21:56:10.247291   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.247336   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 21:56:10.253541   48764 command_runner.go:130] > Certificate will not expire
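Editor's note: "openssl x509 -noout -checkend 86400" exits non-zero if the certificate expires within the next 24 hours, which is why each check above reports "Certificate will not expire". The equivalent check in Go with crypto/x509, as a sketch rather than minikube's code:

// Sketch of the -checkend 86400 check above: report whether a PEM
// certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}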
	I0505 21:56:10.253603   48764 kubeadm.go:391] StartCluster: {Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:56:10.253711   48764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:56:10.253762   48764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:56:10.294445   48764 command_runner.go:130] > 848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0
	I0505 21:56:10.294468   48764 command_runner.go:130] > b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d
	I0505 21:56:10.294477   48764 command_runner.go:130] > 43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2
	I0505 21:56:10.294582   48764 command_runner.go:130] > 2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b
	I0505 21:56:10.294602   48764 command_runner.go:130] > 5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc
	I0505 21:56:10.294608   48764 command_runner.go:130] > b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed
	I0505 21:56:10.294613   48764 command_runner.go:130] > f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9
	I0505 21:56:10.294632   48764 command_runner.go:130] > e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12
	I0505 21:56:10.296255   48764 cri.go:89] found id: "848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0"
	I0505 21:56:10.296273   48764 cri.go:89] found id: "b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d"
	I0505 21:56:10.296277   48764 cri.go:89] found id: "43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2"
	I0505 21:56:10.296280   48764 cri.go:89] found id: "2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b"
	I0505 21:56:10.296283   48764 cri.go:89] found id: "5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc"
	I0505 21:56:10.296295   48764 cri.go:89] found id: "b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed"
	I0505 21:56:10.296300   48764 cri.go:89] found id: "f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9"
	I0505 21:56:10.296302   48764 cri.go:89] found id: "e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12"
	I0505 21:56:10.296305   48764 cri.go:89] found id: ""
	I0505 21:56:10.296341   48764 ssh_runner.go:195] Run: sudo runc list -f json
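Editor's note: the "found id:" entries above come from the crictl command run a few lines earlier, which prints one container ID per line when --quiet is set. A Go sketch of running that command and collecting the IDs; it assumes crictl is on PATH and the caller can reach the CRI-O socket, and it is not minikube's cri.go:

// Sketch of collecting kube-system container IDs the same way as above:
// run crictl with --quiet (one ID per line) and split the output.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}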
	
	
	==> CRI-O <==
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.313843446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946264313820963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35f5afb7-c22f-498e-a9c3-857eaa318130 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.314668402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5c1bc91-cf1d-48ca-b4ea-72d754a47cd6 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.314721761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5c1bc91-cf1d-48ca-b4ea-72d754a47cd6 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.315033679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5c1bc91-cf1d-48ca-b4ea-72d754a47cd6 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.368123557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0173a3bf-b7fb-497b-bc3f-e58c729e11bf name=/runtime.v1.RuntimeService/Version
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.368239987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0173a3bf-b7fb-497b-bc3f-e58c729e11bf name=/runtime.v1.RuntimeService/Version
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.370035385Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=086b39cd-4d7f-4777-af49-4e8cbff7d48a name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.370656408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946264370629302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=086b39cd-4d7f-4777-af49-4e8cbff7d48a name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.371795980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79464ad5-273f-4f4e-9fa2-7270bb25f271 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.371855581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79464ad5-273f-4f4e-9fa2-7270bb25f271 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.372189093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79464ad5-273f-4f4e-9fa2-7270bb25f271 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.418768123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff542d6c-8cb6-40eb-9980-b41cb0020f06 name=/runtime.v1.RuntimeService/Version
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.418840450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff542d6c-8cb6-40eb-9980-b41cb0020f06 name=/runtime.v1.RuntimeService/Version
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.420835404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d133f2a-8b07-4fd7-a307-27cdc010017c name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.421627480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946264421599404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d133f2a-8b07-4fd7-a307-27cdc010017c name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.422340584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a6f0b38-fa29-44bc-aeff-14bac9bbdd63 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.422479197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a6f0b38-fa29-44bc-aeff-14bac9bbdd63 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.422821184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a6f0b38-fa29-44bc-aeff-14bac9bbdd63 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.466959348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dd6b66d-a505-421f-b2c9-030e423afe1c name=/runtime.v1.RuntimeService/Version
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.467061229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dd6b66d-a505-421f-b2c9-030e423afe1c name=/runtime.v1.RuntimeService/Version
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.468023380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=558b2686-c13d-4556-81e3-180335f0dd1f name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.468572552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946264468547146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=558b2686-c13d-4556-81e3-180335f0dd1f name=/runtime.v1.ImageService/ImageFsInfo
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.469635525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2073f472-e7a4-407d-a6bf-857eaa5bb4b1 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.469941422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2073f472-e7a4-407d-a6bf-857eaa5bb4b1 name=/runtime.v1.RuntimeService/ListContainers
	May 05 21:57:44 multinode-019621 crio[2853]: time="2024-05-05 21:57:44.470286722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2073f472-e7a4-407d-a6bf-857eaa5bb4b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f38ee383fdf32       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      54 seconds ago       Running             busybox                   1                   631e97f79659b       busybox-fc5497c4f-cl7hp
	d1c57f4a374d7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   e494359e7189b       kindnet-kbqkb
	3ca286dc16d88       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   2585b130ba2ab       coredns-7db6d8ff4d-h7tbh
	88a7ed5f5366d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   cacf3ad15dd83       kube-proxy-cpdww
	7ea3da7bad03d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   cbc5b8b78dcd2       storage-provisioner
	03073d2772bd2       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   16c19bc428099       kube-scheduler-multinode-019621
	4fef37118d160       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   5426c6040d9cb       etcd-multinode-019621
	08a22997b781a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   62f801040656a       kube-controller-manager-multinode-019621
	0156d27216fa4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   39d26eeac3507       kube-apiserver-multinode-019621
	5da8dc883b84b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   a6044c7de56cf       busybox-fc5497c4f-cl7hp
	848a28f73e60c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   fee2569ec6aa7       storage-provisioner
	b21f2ab80afb5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   5fda04970b0cc       coredns-7db6d8ff4d-h7tbh
	43ae3bcd41585       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   ef9db81f865f2       kindnet-kbqkb
	2014ff87bd1eb       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   0fa7490de9d77       kube-proxy-cpdww
	5cd2dc1892eb7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   adf73ed15a3d6       etcd-multinode-019621
	b1b5f166a5cf3       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago        Exited              kube-scheduler            0                   af74c44c9922d       kube-scheduler-multinode-019621
	f0e5121525f07       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago        Exited              kube-apiserver            0                   812d6350fe96a       kube-apiserver-multinode-019621
	e409273ba65ef       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago        Exited              kube-controller-manager   0                   40300c2b4d51c       kube-controller-manager-multinode-019621
	
	
	==> coredns [3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54751 - 3145 "HINFO IN 7127606931689558220.7006574501752443575. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020443773s
	
	
	==> coredns [b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d] <==
	[INFO] 10.244.1.2:48214 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001453962s
	[INFO] 10.244.1.2:41359 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127209s
	[INFO] 10.244.1.2:43048 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078756s
	[INFO] 10.244.1.2:55869 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291146s
	[INFO] 10.244.1.2:40410 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011286s
	[INFO] 10.244.1.2:38218 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101575s
	[INFO] 10.244.1.2:54239 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089288s
	[INFO] 10.244.0.3:51811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142086s
	[INFO] 10.244.0.3:46046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172671s
	[INFO] 10.244.0.3:54744 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113072s
	[INFO] 10.244.0.3:41169 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043167s
	[INFO] 10.244.1.2:56458 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164591s
	[INFO] 10.244.1.2:55611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011122s
	[INFO] 10.244.1.2:58210 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104363s
	[INFO] 10.244.1.2:37837 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072275s
	[INFO] 10.244.0.3:41990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121911s
	[INFO] 10.244.0.3:58160 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184697s
	[INFO] 10.244.0.3:53311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000276803s
	[INFO] 10.244.0.3:35251 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013276s
	[INFO] 10.244.1.2:34759 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000290898s
	[INFO] 10.244.1.2:51120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177281s
	[INFO] 10.244.1.2:47270 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166876s
	[INFO] 10.244.1.2:47064 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111541s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-019621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-019621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=multinode-019621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T21_49_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:49:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-019621
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:57:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:49:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:49:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:49:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:50:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    multinode-019621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4df3d406bb44e15a4bddf8a7d93deb5
	  System UUID:                b4df3d40-6bb4-4e15-a4bd-df8a7d93deb5
	  Boot ID:                    7bb3c348-b8ac-4623-b778-6e10b769905e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cl7hp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-7db6d8ff4d-h7tbh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m32s
	  kube-system                 etcd-multinode-019621                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m46s
	  kube-system                 kindnet-kbqkb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-apiserver-multinode-019621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-controller-manager-multinode-019621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-proxy-cpdww                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-multinode-019621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m32s              kube-proxy       
	  Normal  Starting                 87s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m46s              kubelet          Node multinode-019621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m46s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m46s              kubelet          Node multinode-019621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m46s              kubelet          Node multinode-019621 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m46s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m34s              node-controller  Node multinode-019621 event: Registered Node multinode-019621 in Controller
	  Normal  NodeReady                7m31s              kubelet          Node multinode-019621 status is now: NodeReady
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node multinode-019621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node multinode-019621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s (x7 over 92s)  kubelet          Node multinode-019621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           76s                node-controller  Node multinode-019621 event: Registered Node multinode-019621 in Controller
	
	
	Name:               multinode-019621-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-019621-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=multinode-019621
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_56_59_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:56:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-019621-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:57:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:56:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:56:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:56:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:57:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    multinode-019621-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f76a590973448cab36871d0ae884056
	  System UUID:                0f76a590-9734-48ca-b368-71d0ae884056
	  Boot ID:                    e2002c0e-5840-4247-b771-41a76f27395e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-58lzm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-4d86k              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m55s
	  kube-system                 kube-proxy-fvqcb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m50s                  kube-proxy       
	  Normal  Starting                 41s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    6m55s (x2 over 6m55s)  kubelet          Node multinode-019621-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x2 over 6m55s)  kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m55s (x2 over 6m55s)  kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m55s                  kubelet          Starting kubelet.
	  Normal  NodeReady                6m45s                  kubelet          Node multinode-019621-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  45s (x2 over 46s)      kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x2 over 46s)      kubelet          Node multinode-019621-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x2 over 46s)      kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           41s                    node-controller  Node multinode-019621-m02 event: Registered Node multinode-019621-m02 in Controller
	  Normal  NodeReady                37s                    kubelet          Node multinode-019621-m02 status is now: NodeReady
	
	
	Name:               multinode-019621-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-019621-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=multinode-019621
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_57_32_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:57:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-019621-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:57:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:57:41 +0000   Sun, 05 May 2024 21:57:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:57:41 +0000   Sun, 05 May 2024 21:57:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:57:41 +0000   Sun, 05 May 2024 21:57:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:57:41 +0000   Sun, 05 May 2024 21:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    multinode-019621-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 de85bacdf13243eba499fc0d8cd7257e
	  System UUID:                de85bacd-f132-43eb-a499-fc0d8cd7257e
	  Boot ID:                    175c014f-4143-4aef-a48f-f159d03c29ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8tzxc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-proxy-j9cqt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m1s                   kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m19s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m6s (x2 over 6m6s)    kubelet     Node multinode-019621-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x2 over 6m6s)    kubelet     Node multinode-019621-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x2 over 6m6s)    kubelet     Node multinode-019621-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m57s                  kubelet     Node multinode-019621-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m24s (x2 over 5m24s)  kubelet     Node multinode-019621-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m24s (x2 over 5m24s)  kubelet     Node multinode-019621-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m24s (x2 over 5m24s)  kubelet     Node multinode-019621-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m15s                  kubelet     Node multinode-019621-m03 status is now: NodeReady
	  Normal  Starting                 13s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet     Node multinode-019621-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet     Node multinode-019621-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet     Node multinode-019621-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-019621-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055833] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059481] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.185958] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.145941] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.282937] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.882997] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.068390] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.588834] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.660409] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.909630] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.094283] kauditd_printk_skb: 41 callbacks suppressed
	[May 5 21:50] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.101601] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[May 5 21:51] kauditd_printk_skb: 84 callbacks suppressed
	[May 5 21:56] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.151412] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.192792] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.136857] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +0.310207] systemd-fstab-generator[2838]: Ignoring "noauto" option for root device
	[  +1.101670] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[  +2.268051] systemd-fstab-generator[3060]: Ignoring "noauto" option for root device
	[  +0.943417] kauditd_printk_skb: 154 callbacks suppressed
	[ +16.176838] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.105310] systemd-fstab-generator[3876]: Ignoring "noauto" option for root device
	[ +18.235404] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813] <==
	{"level":"info","ts":"2024-05-05T21:56:13.689109Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-05T21:56:13.680441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 switched to configuration voters=(4633241037315770128)"}
	{"level":"info","ts":"2024-05-05T21:56:13.689858Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ae8b7a508f3fd394","local-member-id":"404c942cebf80710","added-peer-id":"404c942cebf80710","added-peer-peer-urls":["https://192.168.39.30:2380"]}
	{"level":"info","ts":"2024-05-05T21:56:13.69011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-05T21:56:13.700082Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae8b7a508f3fd394","local-member-id":"404c942cebf80710","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:56:13.703588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:56:13.727837Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-05T21:56:13.729332Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"404c942cebf80710","initial-advertise-peer-urls":["https://192.168.39.30:2380"],"listen-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.30:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-05T21:56:13.733844Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-05T21:56:13.728151Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:56:13.746536Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:56:14.598624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-05T21:56:14.598703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-05T21:56:14.598753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgPreVoteResp from 404c942cebf80710 at term 2"}
	{"level":"info","ts":"2024-05-05T21:56:14.598776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became candidate at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.598782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgVoteResp from 404c942cebf80710 at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.598861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became leader at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.598872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 404c942cebf80710 elected leader 404c942cebf80710 at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.605995Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"404c942cebf80710","local-member-attributes":"{Name:multinode-019621 ClientURLs:[https://192.168.39.30:2379]}","request-path":"/0/members/404c942cebf80710/attributes","cluster-id":"ae8b7a508f3fd394","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-05T21:56:14.606061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:56:14.60647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-05T21:56:14.606492Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-05T21:56:14.606509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:56:14.608561Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.30:2379"}
	{"level":"info","ts":"2024-05-05T21:56:14.608662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc] <==
	{"level":"info","ts":"2024-05-05T21:51:38.285822Z","caller":"traceutil/trace.go:171","msg":"trace[1306098951] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:618; }","duration":"135.989501ms","start":"2024-05-05T21:51:38.149803Z","end":"2024-05-05T21:51:38.285792Z","steps":["trace[1306098951] 'read index received'  (duration: 128.379785ms)","trace[1306098951] 'applied index is now lower than readState.Index'  (duration: 7.608728ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-05T21:51:38.286289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.414159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-05T21:51:38.286438Z","caller":"traceutil/trace.go:171","msg":"trace[1702837299] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:587; }","duration":"136.584359ms","start":"2024-05-05T21:51:38.149778Z","end":"2024-05-05T21:51:38.286363Z","steps":["trace[1702837299] 'agreement among raft nodes before linearized reading'  (duration: 136.126175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:51:38.286512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.670428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-019621-m03\" ","response":"range_response_count:1 size:2095"}
	{"level":"info","ts":"2024-05-05T21:51:38.28657Z","caller":"traceutil/trace.go:171","msg":"trace[145520545] range","detail":"{range_begin:/registry/minions/multinode-019621-m03; range_end:; response_count:1; response_revision:589; }","duration":"134.745395ms","start":"2024-05-05T21:51:38.151816Z","end":"2024-05-05T21:51:38.286561Z","steps":["trace[145520545] 'agreement among raft nodes before linearized reading'  (duration: 134.654946ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:38.286754Z","caller":"traceutil/trace.go:171","msg":"trace[1674215034] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"136.879635ms","start":"2024-05-05T21:51:38.149861Z","end":"2024-05-05T21:51:38.286741Z","steps":["trace[1674215034] 'process raft request'  (duration: 136.413469ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:38.286925Z","caller":"traceutil/trace.go:171","msg":"trace[1174492820] transaction","detail":"{read_only:false; number_of_response:1; response_revision:588; }","duration":"137.035118ms","start":"2024-05-05T21:51:38.149882Z","end":"2024-05-05T21:51:38.286917Z","steps":["trace[1174492820] 'process raft request'  (duration: 136.446283ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:38.287156Z","caller":"traceutil/trace.go:171","msg":"trace[1999143151] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"123.245258ms","start":"2024-05-05T21:51:38.163903Z","end":"2024-05-05T21:51:38.287148Z","steps":["trace[1999143151] 'process raft request'  (duration: 122.44443ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:43.346092Z","caller":"traceutil/trace.go:171","msg":"trace[927855184] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"241.900586ms","start":"2024-05-05T21:51:43.104145Z","end":"2024-05-05T21:51:43.346045Z","steps":["trace[927855184] 'process raft request'  (duration: 179.182139ms)","trace[927855184] 'compare'  (duration: 62.27827ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:51:43.707172Z","caller":"traceutil/trace.go:171","msg":"trace[1755155320] linearizableReadLoop","detail":"{readStateIndex:660; appliedIndex:659; }","duration":"346.183647ms","start":"2024-05-05T21:51:43.360967Z","end":"2024-05-05T21:51:43.707151Z","steps":["trace[1755155320] 'read index received'  (duration: 254.17349ms)","trace[1755155320] 'applied index is now lower than readState.Index'  (duration: 92.009215ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:51:43.707331Z","caller":"traceutil/trace.go:171","msg":"trace[68441250] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"355.683012ms","start":"2024-05-05T21:51:43.351633Z","end":"2024-05-05T21:51:43.707316Z","steps":["trace[68441250] 'process raft request'  (duration: 263.560753ms)","trace[68441250] 'compare'  (duration: 91.712784ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-05T21:51:43.707591Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:51:43.351617Z","time spent":"355.868846ms","remote":"127.0.0.1:46604","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:600 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"warn","ts":"2024-05-05T21:51:43.707721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"346.747963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-019621-m03\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-05-05T21:51:43.707806Z","caller":"traceutil/trace.go:171","msg":"trace[2015094061] range","detail":"{range_begin:/registry/minions/multinode-019621-m03; range_end:; response_count:1; response_revision:625; }","duration":"346.850346ms","start":"2024-05-05T21:51:43.360945Z","end":"2024-05-05T21:51:43.707795Z","steps":["trace[2015094061] 'agreement among raft nodes before linearized reading'  (duration: 346.410921ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:51:43.707863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:51:43.360932Z","time spent":"346.92034ms","remote":"127.0.0.1:46306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2976,"request content":"key:\"/registry/minions/multinode-019621-m03\" "}
	{"level":"info","ts":"2024-05-05T21:54:36.263717Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-05T21:54:36.26392Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-019621","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"]}
	{"level":"warn","ts":"2024-05-05T21:54:36.264046Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.30:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:54:36.264099Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.30:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:54:36.264192Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:54:36.264273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:54:36.318779Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"404c942cebf80710","current-leader-member-id":"404c942cebf80710"}
	{"level":"info","ts":"2024-05-05T21:54:36.321484Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:54:36.321668Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:54:36.321709Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-019621","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"]}
	
	
	==> kernel <==
	 21:57:45 up 8 min,  0 users,  load average: 0.25, 0.25, 0.14
	Linux multinode-019621 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2] <==
	I0505 21:53:53.839129       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:03.852205       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:03.852260       1 main.go:227] handling current node
	I0505 21:54:03.852271       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:03.852277       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:03.852457       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:03.852495       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:13.866962       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:13.867092       1 main.go:227] handling current node
	I0505 21:54:13.867136       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:13.867170       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:13.867314       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:13.867334       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:23.883586       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:23.883757       1 main.go:227] handling current node
	I0505 21:54:23.883793       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:23.883821       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:23.884120       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:23.884211       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:33.889531       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:33.889613       1 main.go:227] handling current node
	I0505 21:54:33.889635       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:33.889652       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:33.889757       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:33.889777       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b] <==
	I0505 21:56:57.804842       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:57:07.817760       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:57:07.817908       1 main.go:227] handling current node
	I0505 21:57:07.817937       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:57:07.817959       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:57:07.818076       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:57:07.818104       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:57:17.831160       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:57:17.831360       1 main.go:227] handling current node
	I0505 21:57:17.831485       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:57:17.831508       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:57:17.831655       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:57:17.831677       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:57:27.843669       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:57:27.843776       1 main.go:227] handling current node
	I0505 21:57:27.843796       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:57:27.843802       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:57:27.843920       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:57:27.843957       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:57:37.855005       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:57:37.855066       1 main.go:227] handling current node
	I0505 21:57:37.855080       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:57:37.855096       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:57:37.856990       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:57:37.857047       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d] <==
	I0505 21:56:16.019987       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:56:16.020070       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:56:16.021686       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:56:16.021727       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:56:16.021849       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:56:16.029152       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:56:16.030672       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:56:16.030756       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:56:16.030780       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:56:16.030802       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:56:16.030824       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:56:16.034447       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0505 21:56:16.055478       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0505 21:56:16.077593       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:56:16.099643       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:56:16.099718       1 policy_source.go:224] refreshing policies
	I0505 21:56:16.122741       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:56:16.924891       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0505 21:56:17.848103       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0505 21:56:17.969505       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0505 21:56:17.980985       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0505 21:56:18.061789       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0505 21:56:18.072181       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0505 21:56:28.940038       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:56:29.031449       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9] <==
	W0505 21:54:36.299255       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299299       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299324       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299353       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299508       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299549       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299580       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299611       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299643       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299668       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299692       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299718       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299746       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299771       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299801       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299832       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299856       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299886       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299931       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299955       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299977       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.300003       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.300026       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.300072       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c] <==
	I0505 21:56:29.368443       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:56:29.368495       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0505 21:56:29.373247       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:56:54.366130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="778.183µs"
	I0505 21:56:54.598572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.564924ms"
	I0505 21:56:54.610773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.698685ms"
	I0505 21:56:54.611625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="130.989µs"
	I0505 21:56:59.092209       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m02\" does not exist"
	I0505 21:56:59.108770       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m02" podCIDRs=["10.244.1.0/24"]
	I0505 21:57:01.007815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.791µs"
	I0505 21:57:01.021962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.023µs"
	I0505 21:57:01.030630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.172µs"
	I0505 21:57:01.051953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.578µs"
	I0505 21:57:01.060979       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.413µs"
	I0505 21:57:01.065134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.403µs"
	I0505 21:57:07.976483       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:57:08.001847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.835µs"
	I0505 21:57:08.018782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.992µs"
	I0505 21:57:11.534885       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.756575ms"
	I0505 21:57:11.535073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="135.563µs"
	I0505 21:57:30.567542       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:57:31.836710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m03\" does not exist"
	I0505 21:57:31.836769       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:57:31.862879       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m03" podCIDRs=["10.244.2.0/24"]
	I0505 21:57:41.318601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	
	
	==> kube-controller-manager [e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12] <==
	I0505 21:50:49.648862       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m02\" does not exist"
	I0505 21:50:49.677709       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m02" podCIDRs=["10.244.1.0/24"]
	I0505 21:50:50.989701       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-019621-m02"
	I0505 21:50:59.027834       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:51:01.511030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.120069ms"
	I0505 21:51:01.529626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.356541ms"
	I0505 21:51:01.529959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.614µs"
	I0505 21:51:01.550449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.061µs"
	I0505 21:51:04.989068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.808959ms"
	I0505 21:51:04.989351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.394µs"
	I0505 21:51:05.542172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.078903ms"
	I0505 21:51:05.542325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.995µs"
	I0505 21:51:38.143921       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m03\" does not exist"
	I0505 21:51:38.144670       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:51:38.295672       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m03" podCIDRs=["10.244.2.0/24"]
	I0505 21:51:41.010806       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-019621-m03"
	I0505 21:51:47.897798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:52:19.384592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:52:20.391564       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:52:20.391738       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m03\" does not exist"
	I0505 21:52:20.403283       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m03" podCIDRs=["10.244.3.0/24"]
	I0505 21:52:29.569246       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:53:11.060888       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:53:16.164841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.716442ms"
	I0505 21:53:16.164986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.511µs"
	
	
	==> kube-proxy [2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b] <==
	I0505 21:50:12.572524       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:50:12.581624       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.30"]
	I0505 21:50:12.675852       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:50:12.675941       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:50:12.675957       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:50:12.690521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:50:12.690746       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:50:12.690759       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:50:12.692250       1 config.go:192] "Starting service config controller"
	I0505 21:50:12.692264       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:50:12.692333       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:50:12.692337       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:50:12.699810       1 config.go:319] "Starting node config controller"
	I0505 21:50:12.700654       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:50:12.793082       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:50:12.793115       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:50:12.801124       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6] <==
	I0505 21:56:17.112971       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:56:17.128208       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.30"]
	I0505 21:56:17.202836       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:56:17.202901       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:56:17.202919       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:56:17.205780       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:56:17.205958       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:56:17.206000       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:56:17.207641       1 config.go:192] "Starting service config controller"
	I0505 21:56:17.207683       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:56:17.207723       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:56:17.207727       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:56:17.208033       1 config.go:319] "Starting node config controller"
	I0505 21:56:17.208077       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:56:17.308685       1 shared_informer.go:320] Caches are synced for node config
	I0505 21:56:17.308739       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:56:17.308769       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19] <==
	I0505 21:56:14.071462       1 serving.go:380] Generated self-signed cert in-memory
	W0505 21:56:16.003337       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0505 21:56:16.003537       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:56:16.003654       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 21:56:16.003688       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 21:56:16.039628       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0505 21:56:16.039694       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:56:16.043195       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0505 21:56:16.043962       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 21:56:16.044159       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:56:16.043988       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:56:16.144922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed] <==
	W0505 21:49:55.810803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:55.810812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:55.810937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:55.810982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:55.811027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:55.811037       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:55.811113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:49:55.811150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:49:55.811213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:49:55.811251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:49:55.811297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:49:55.811308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:49:56.719769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:56.719800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:56.747106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:56.747176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:57.007806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:49:57.007864       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 21:49:57.008754       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:49:57.008811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:49:57.051663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:49:57.051749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0505 21:49:58.893888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:54:36.257148       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0505 21:54:36.257902       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 05 21:56:13 multinode-019621 kubelet[3067]: E0505 21:56:13.036028    3067 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.30:8443: connect: connection refused
	May 05 21:56:13 multinode-019621 kubelet[3067]: I0505 21:56:13.682097    3067 kubelet_node_status.go:73] "Attempting to register node" node="multinode-019621"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.151057    3067 apiserver.go:52] "Watching apiserver"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.155670    3067 topology_manager.go:215] "Topology Admit Handler" podUID="e2119b45-a792-4860-8906-6d4b422fa032" podNamespace="kube-system" podName="kindnet-kbqkb"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.156088    3067 topology_manager.go:215] "Topology Admit Handler" podUID="6cd95fb4-395f-40b0-ac69-985877734928" podNamespace="kube-system" podName="kube-proxy-cpdww"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.156459    3067 topology_manager.go:215] "Topology Admit Handler" podUID="11e1eeff-5441-44cd-94b0-b4a8b7773170" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h7tbh"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.156742    3067 topology_manager.go:215] "Topology Admit Handler" podUID="ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe" podNamespace="kube-system" podName="storage-provisioner"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.157132    3067 topology_manager.go:215] "Topology Admit Handler" podUID="fc46928d-642e-418f-9db6-c496cedab268" podNamespace="default" podName="busybox-fc5497c4f-cl7hp"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.172445    3067 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.192777    3067 kubelet_node_status.go:112] "Node was previously registered" node="multinode-019621"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.192958    3067 kubelet_node_status.go:76] "Successfully registered node" node="multinode-019621"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.195550    3067 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.197878    3067 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216327    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e2119b45-a792-4860-8906-6d4b422fa032-cni-cfg\") pod \"kindnet-kbqkb\" (UID: \"e2119b45-a792-4860-8906-6d4b422fa032\") " pod="kube-system/kindnet-kbqkb"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216360    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2119b45-a792-4860-8906-6d4b422fa032-xtables-lock\") pod \"kindnet-kbqkb\" (UID: \"e2119b45-a792-4860-8906-6d4b422fa032\") " pod="kube-system/kindnet-kbqkb"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216483    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe-tmp\") pod \"storage-provisioner\" (UID: \"ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe\") " pod="kube-system/storage-provisioner"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216512    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2119b45-a792-4860-8906-6d4b422fa032-lib-modules\") pod \"kindnet-kbqkb\" (UID: \"e2119b45-a792-4860-8906-6d4b422fa032\") " pod="kube-system/kindnet-kbqkb"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216562    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cd95fb4-395f-40b0-ac69-985877734928-xtables-lock\") pod \"kube-proxy-cpdww\" (UID: \"6cd95fb4-395f-40b0-ac69-985877734928\") " pod="kube-system/kube-proxy-cpdww"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216579    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cd95fb4-395f-40b0-ac69-985877734928-lib-modules\") pod \"kube-proxy-cpdww\" (UID: \"6cd95fb4-395f-40b0-ac69-985877734928\") " pod="kube-system/kube-proxy-cpdww"
	May 05 21:56:23 multinode-019621 kubelet[3067]: I0505 21:56:23.703582    3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 05 21:57:12 multinode-019621 kubelet[3067]: E0505 21:57:12.291026    3067 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:57:12 multinode-019621 kubelet[3067]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:57:12 multinode-019621 kubelet[3067]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:57:12 multinode-019621 kubelet[3067]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:57:12 multinode-019621 kubelet[3067]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 21:57:44.000442   49852 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18602-11466/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-019621 -n multinode-019621
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-019621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (313.56s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 stop
E0505 21:59:31.830703   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019621 stop: exit status 82 (2m0.492191077s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-019621-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-019621 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019621 status: exit status 3 (18.751882222s)

                                                
                                                
-- stdout --
	multinode-019621
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-019621-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 22:00:07.771823   50522 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.242:22: connect: no route to host
	E0505 22:00:07.771878   50522 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.242:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-019621 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-019621 -n multinode-019621
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-019621 logs -n 25: (1.666387299s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621:/home/docker/cp-test_multinode-019621-m02_multinode-019621.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621 sudo cat                                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m02_multinode-019621.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03:/home/docker/cp-test_multinode-019621-m02_multinode-019621-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621-m03 sudo cat                                   | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m02_multinode-019621-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp testdata/cp-test.txt                                                | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile971504099/001/cp-test_multinode-019621-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621:/home/docker/cp-test_multinode-019621-m03_multinode-019621.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621 sudo cat                                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m03_multinode-019621.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt                       | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02:/home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621-m02 sudo cat                                   | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-019621 node stop m03                                                          | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:52 UTC |
	| node    | multinode-019621 node start                                                             | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:52 UTC | 05 May 24 21:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-019621                                                                | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:52 UTC |                     |
	| stop    | -p multinode-019621                                                                     | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:52 UTC |                     |
	| start   | -p multinode-019621                                                                     | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:54 UTC | 05 May 24 21:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-019621                                                                | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:57 UTC |                     |
	| node    | multinode-019621 node delete                                                            | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:57 UTC | 05 May 24 21:57 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-019621 stop                                                                   | multinode-019621 | jenkins | v1.33.0 | 05 May 24 21:57 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 21:54:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 21:54:35.365524   48764 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:54:35.365789   48764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:54:35.365800   48764 out.go:304] Setting ErrFile to fd 2...
	I0505 21:54:35.365804   48764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:54:35.365983   48764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:54:35.366565   48764 out.go:298] Setting JSON to false
	I0505 21:54:35.367440   48764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5822,"bootTime":1714940253,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:54:35.367520   48764 start.go:139] virtualization: kvm guest
	I0505 21:54:35.370265   48764 out.go:177] * [multinode-019621] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:54:35.371936   48764 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:54:35.371944   48764 notify.go:220] Checking for updates...
	I0505 21:54:35.373550   48764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:54:35.375278   48764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:54:35.377007   48764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:54:35.378385   48764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:54:35.379805   48764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:54:35.382157   48764 config.go:182] Loaded profile config "multinode-019621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:54:35.382369   48764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:54:35.383439   48764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:54:35.383533   48764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:54:35.398745   48764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
	I0505 21:54:35.399202   48764 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:54:35.399755   48764 main.go:141] libmachine: Using API Version  1
	I0505 21:54:35.399780   48764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:54:35.400061   48764 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:54:35.400214   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:54:35.435828   48764 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:54:35.437234   48764 start.go:297] selected driver: kvm2
	I0505 21:54:35.437262   48764 start.go:901] validating driver "kvm2" against &{Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:54:35.437442   48764 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:54:35.437888   48764 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:54:35.437982   48764 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 21:54:35.452901   48764 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 21:54:35.453617   48764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 21:54:35.453683   48764 cni.go:84] Creating CNI manager for ""
	I0505 21:54:35.453699   48764 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 21:54:35.453771   48764 start.go:340] cluster config:
	{Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:54:35.453926   48764 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 21:54:35.455814   48764 out.go:177] * Starting "multinode-019621" primary control-plane node in "multinode-019621" cluster
	I0505 21:54:35.457279   48764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:54:35.457332   48764 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 21:54:35.457343   48764 cache.go:56] Caching tarball of preloaded images
	I0505 21:54:35.457446   48764 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 21:54:35.457460   48764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 21:54:35.457614   48764 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/config.json ...
	I0505 21:54:35.457854   48764 start.go:360] acquireMachinesLock for multinode-019621: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 21:54:35.457920   48764 start.go:364] duration metric: took 44.275µs to acquireMachinesLock for "multinode-019621"
	I0505 21:54:35.457942   48764 start.go:96] Skipping create...Using existing machine configuration
	I0505 21:54:35.457951   48764 fix.go:54] fixHost starting: 
	I0505 21:54:35.458238   48764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:54:35.458285   48764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:54:35.472999   48764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0505 21:54:35.473441   48764 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:54:35.473848   48764 main.go:141] libmachine: Using API Version  1
	I0505 21:54:35.473869   48764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:54:35.474300   48764 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:54:35.474496   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:54:35.474668   48764 main.go:141] libmachine: (multinode-019621) Calling .GetState
	I0505 21:54:35.476278   48764 fix.go:112] recreateIfNeeded on multinode-019621: state=Running err=<nil>
	W0505 21:54:35.476297   48764 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 21:54:35.479165   48764 out.go:177] * Updating the running kvm2 "multinode-019621" VM ...
	I0505 21:54:35.480383   48764 machine.go:94] provisionDockerMachine start ...
	I0505 21:54:35.480400   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:54:35.480590   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.483355   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.483852   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.483885   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.484031   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.484218   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.484402   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.484629   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.484825   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:35.485004   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:35.485016   48764 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 21:54:35.602100   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-019621
	
	I0505 21:54:35.602139   48764 main.go:141] libmachine: (multinode-019621) Calling .GetMachineName
	I0505 21:54:35.602372   48764 buildroot.go:166] provisioning hostname "multinode-019621"
	I0505 21:54:35.602393   48764 main.go:141] libmachine: (multinode-019621) Calling .GetMachineName
	I0505 21:54:35.602582   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.604950   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.605336   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.605365   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.605553   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.605699   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.605852   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.605999   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.606157   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:35.606363   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:35.606388   48764 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-019621 && echo "multinode-019621" | sudo tee /etc/hostname
	I0505 21:54:35.736303   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-019621
	
	I0505 21:54:35.736335   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.739143   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.739579   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.739614   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.739765   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.739977   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.740134   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.740307   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.740452   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:35.740642   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:35.740667   48764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-019621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-019621/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-019621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 21:54:35.852981   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 21:54:35.853012   48764 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 21:54:35.853051   48764 buildroot.go:174] setting up certificates
	I0505 21:54:35.853063   48764 provision.go:84] configureAuth start
	I0505 21:54:35.853078   48764 main.go:141] libmachine: (multinode-019621) Calling .GetMachineName
	I0505 21:54:35.853367   48764 main.go:141] libmachine: (multinode-019621) Calling .GetIP
	I0505 21:54:35.856169   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.856523   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.856548   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.856730   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.858965   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.859458   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.859501   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.859690   48764 provision.go:143] copyHostCerts
	I0505 21:54:35.859723   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:54:35.859773   48764 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 21:54:35.859782   48764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 21:54:35.859864   48764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 21:54:35.859973   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:54:35.859998   48764 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 21:54:35.860008   48764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 21:54:35.860046   48764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 21:54:35.860114   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:54:35.860138   48764 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 21:54:35.860147   48764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 21:54:35.860181   48764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 21:54:35.860243   48764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.multinode-019621 san=[127.0.0.1 192.168.39.30 localhost minikube multinode-019621]
	I0505 21:54:35.938701   48764 provision.go:177] copyRemoteCerts
	I0505 21:54:35.938755   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 21:54:35.938777   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:35.941747   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.942069   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:35.942097   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:35.942302   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:35.942491   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:35.942622   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:35.942740   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:54:36.031277   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 21:54:36.031345   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0505 21:54:36.061323   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 21:54:36.061382   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 21:54:36.090462   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 21:54:36.090522   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 21:54:36.118349   48764 provision.go:87] duration metric: took 265.274749ms to configureAuth
	I0505 21:54:36.118376   48764 buildroot.go:189] setting minikube options for container-runtime
	I0505 21:54:36.118625   48764 config.go:182] Loaded profile config "multinode-019621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:54:36.118718   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:54:36.121585   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:36.121941   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:54:36.121961   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:54:36.122176   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:54:36.122380   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:36.122560   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:54:36.122670   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:54:36.122827   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:54:36.123006   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:54:36.123026   48764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 21:56:07.102795   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 21:56:07.102823   48764 machine.go:97] duration metric: took 1m31.622428206s to provisionDockerMachine
	I0505 21:56:07.102836   48764 start.go:293] postStartSetup for "multinode-019621" (driver="kvm2")
	I0505 21:56:07.102847   48764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 21:56:07.102867   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.103218   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 21:56:07.103242   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.106285   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.106702   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.106726   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.106885   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.107055   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.107241   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.107410   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:56:07.196417   48764 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 21:56:07.201193   48764 command_runner.go:130] > NAME=Buildroot
	I0505 21:56:07.201207   48764 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0505 21:56:07.201211   48764 command_runner.go:130] > ID=buildroot
	I0505 21:56:07.201215   48764 command_runner.go:130] > VERSION_ID=2023.02.9
	I0505 21:56:07.201220   48764 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0505 21:56:07.201383   48764 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 21:56:07.201407   48764 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 21:56:07.201478   48764 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 21:56:07.201556   48764 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 21:56:07.201565   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /etc/ssl/certs/187982.pem
	I0505 21:56:07.201645   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 21:56:07.211276   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:56:07.237857   48764 start.go:296] duration metric: took 135.006162ms for postStartSetup
	I0505 21:56:07.237900   48764 fix.go:56] duration metric: took 1m31.779950719s for fixHost
	I0505 21:56:07.237921   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.240675   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.241067   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.241100   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.241209   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.241400   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.241562   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.241758   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.241924   48764 main.go:141] libmachine: Using SSH client type: native
	I0505 21:56:07.242084   48764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.30 22 <nil> <nil>}
	I0505 21:56:07.242095   48764 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 21:56:07.353227   48764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714946167.327333939
	
	I0505 21:56:07.353259   48764 fix.go:216] guest clock: 1714946167.327333939
	I0505 21:56:07.353266   48764 fix.go:229] Guest: 2024-05-05 21:56:07.327333939 +0000 UTC Remote: 2024-05-05 21:56:07.237905307 +0000 UTC m=+91.920851726 (delta=89.428632ms)
	I0505 21:56:07.353285   48764 fix.go:200] guest clock delta is within tolerance: 89.428632ms
	I0505 21:56:07.353289   48764 start.go:83] releasing machines lock for "multinode-019621", held for 1m31.895357194s
	I0505 21:56:07.353305   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.353561   48764 main.go:141] libmachine: (multinode-019621) Calling .GetIP
	I0505 21:56:07.356426   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.356757   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.356793   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.356979   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.357540   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.357688   48764 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:56:07.357790   48764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 21:56:07.357827   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.357887   48764 ssh_runner.go:195] Run: cat /version.json
	I0505 21:56:07.357924   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:56:07.360512   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.360731   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.360916   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.360943   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.361066   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.361113   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:07.361141   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:07.361237   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.361322   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:56:07.361394   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.361463   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:56:07.361570   48764 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:56:07.361638   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:56:07.361707   48764 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:56:07.475261   48764 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0505 21:56:07.476098   48764 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0505 21:56:07.476259   48764 ssh_runner.go:195] Run: systemctl --version
	I0505 21:56:07.483431   48764 command_runner.go:130] > systemd 252 (252)
	I0505 21:56:07.483475   48764 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0505 21:56:07.483558   48764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 21:56:07.649464   48764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 21:56:07.658749   48764 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0505 21:56:07.659231   48764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 21:56:07.659313   48764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 21:56:07.669806   48764 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 21:56:07.669824   48764 start.go:494] detecting cgroup driver to use...
	I0505 21:56:07.669873   48764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 21:56:07.687511   48764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 21:56:07.702609   48764 docker.go:217] disabling cri-docker service (if available) ...
	I0505 21:56:07.702734   48764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 21:56:07.717414   48764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 21:56:07.732316   48764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 21:56:07.888503   48764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 21:56:08.037683   48764 docker.go:233] disabling docker service ...
	I0505 21:56:08.037761   48764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 21:56:08.055727   48764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 21:56:08.070319   48764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 21:56:08.220380   48764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 21:56:08.364699   48764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 21:56:08.380706   48764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 21:56:08.401379   48764 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0505 21:56:08.401704   48764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 21:56:08.401763   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.414319   48764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 21:56:08.414387   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.426642   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.440247   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.452959   48764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 21:56:08.466082   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.478827   48764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.490821   48764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 21:56:08.503542   48764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 21:56:08.514535   48764 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0505 21:56:08.514606   48764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 21:56:08.525470   48764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:56:08.671769   48764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 21:56:09.263404   48764 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 21:56:09.263491   48764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 21:56:09.268914   48764 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0505 21:56:09.268935   48764 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0505 21:56:09.268942   48764 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0505 21:56:09.268948   48764 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0505 21:56:09.268953   48764 command_runner.go:130] > Access: 2024-05-05 21:56:09.121071108 +0000
	I0505 21:56:09.268963   48764 command_runner.go:130] > Modify: 2024-05-05 21:56:09.121071108 +0000
	I0505 21:56:09.268969   48764 command_runner.go:130] > Change: 2024-05-05 21:56:09.121071108 +0000
	I0505 21:56:09.268972   48764 command_runner.go:130] >  Birth: -
	I0505 21:56:09.269150   48764 start.go:562] Will wait 60s for crictl version
	I0505 21:56:09.269211   48764 ssh_runner.go:195] Run: which crictl
	I0505 21:56:09.273557   48764 command_runner.go:130] > /usr/bin/crictl
	I0505 21:56:09.273688   48764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 21:56:09.313317   48764 command_runner.go:130] > Version:  0.1.0
	I0505 21:56:09.313345   48764 command_runner.go:130] > RuntimeName:  cri-o
	I0505 21:56:09.313351   48764 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0505 21:56:09.313356   48764 command_runner.go:130] > RuntimeApiVersion:  v1
	I0505 21:56:09.314643   48764 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 21:56:09.314731   48764 ssh_runner.go:195] Run: crio --version
	I0505 21:56:09.345818   48764 command_runner.go:130] > crio version 1.29.1
	I0505 21:56:09.345842   48764 command_runner.go:130] > Version:        1.29.1
	I0505 21:56:09.345852   48764 command_runner.go:130] > GitCommit:      unknown
	I0505 21:56:09.345858   48764 command_runner.go:130] > GitCommitDate:  unknown
	I0505 21:56:09.345864   48764 command_runner.go:130] > GitTreeState:   clean
	I0505 21:56:09.345874   48764 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0505 21:56:09.345880   48764 command_runner.go:130] > GoVersion:      go1.21.6
	I0505 21:56:09.345886   48764 command_runner.go:130] > Compiler:       gc
	I0505 21:56:09.345893   48764 command_runner.go:130] > Platform:       linux/amd64
	I0505 21:56:09.345900   48764 command_runner.go:130] > Linkmode:       dynamic
	I0505 21:56:09.345907   48764 command_runner.go:130] > BuildTags:      
	I0505 21:56:09.345914   48764 command_runner.go:130] >   containers_image_ostree_stub
	I0505 21:56:09.345921   48764 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0505 21:56:09.345932   48764 command_runner.go:130] >   btrfs_noversion
	I0505 21:56:09.345939   48764 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0505 21:56:09.345949   48764 command_runner.go:130] >   libdm_no_deferred_remove
	I0505 21:56:09.345954   48764 command_runner.go:130] >   seccomp
	I0505 21:56:09.345963   48764 command_runner.go:130] > LDFlags:          unknown
	I0505 21:56:09.345969   48764 command_runner.go:130] > SeccompEnabled:   true
	I0505 21:56:09.345975   48764 command_runner.go:130] > AppArmorEnabled:  false
	I0505 21:56:09.346057   48764 ssh_runner.go:195] Run: crio --version
	I0505 21:56:09.382776   48764 command_runner.go:130] > crio version 1.29.1
	I0505 21:56:09.382815   48764 command_runner.go:130] > Version:        1.29.1
	I0505 21:56:09.382821   48764 command_runner.go:130] > GitCommit:      unknown
	I0505 21:56:09.382825   48764 command_runner.go:130] > GitCommitDate:  unknown
	I0505 21:56:09.382829   48764 command_runner.go:130] > GitTreeState:   clean
	I0505 21:56:09.382835   48764 command_runner.go:130] > BuildDate:      2024-04-30T23:23:49Z
	I0505 21:56:09.382845   48764 command_runner.go:130] > GoVersion:      go1.21.6
	I0505 21:56:09.382849   48764 command_runner.go:130] > Compiler:       gc
	I0505 21:56:09.382854   48764 command_runner.go:130] > Platform:       linux/amd64
	I0505 21:56:09.382859   48764 command_runner.go:130] > Linkmode:       dynamic
	I0505 21:56:09.382869   48764 command_runner.go:130] > BuildTags:      
	I0505 21:56:09.382876   48764 command_runner.go:130] >   containers_image_ostree_stub
	I0505 21:56:09.382883   48764 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0505 21:56:09.382889   48764 command_runner.go:130] >   btrfs_noversion
	I0505 21:56:09.382902   48764 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0505 21:56:09.382909   48764 command_runner.go:130] >   libdm_no_deferred_remove
	I0505 21:56:09.382916   48764 command_runner.go:130] >   seccomp
	I0505 21:56:09.382923   48764 command_runner.go:130] > LDFlags:          unknown
	I0505 21:56:09.382930   48764 command_runner.go:130] > SeccompEnabled:   true
	I0505 21:56:09.382937   48764 command_runner.go:130] > AppArmorEnabled:  false
	I0505 21:56:09.386612   48764 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 21:56:09.388198   48764 main.go:141] libmachine: (multinode-019621) Calling .GetIP
	I0505 21:56:09.390959   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:09.391260   48764 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:56:09.391293   48764 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:56:09.391497   48764 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 21:56:09.396816   48764 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0505 21:56:09.396985   48764 kubeadm.go:877] updating cluster {Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 21:56:09.397133   48764 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 21:56:09.397171   48764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:56:09.448515   48764 command_runner.go:130] > {
	I0505 21:56:09.448537   48764 command_runner.go:130] >   "images": [
	I0505 21:56:09.448542   48764 command_runner.go:130] >     {
	I0505 21:56:09.448549   48764 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0505 21:56:09.448554   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448560   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0505 21:56:09.448564   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448568   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448576   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0505 21:56:09.448583   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0505 21:56:09.448587   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448595   48764 command_runner.go:130] >       "size": "65291810",
	I0505 21:56:09.448601   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.448606   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.448615   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.448620   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.448640   48764 command_runner.go:130] >     },
	I0505 21:56:09.448651   48764 command_runner.go:130] >     {
	I0505 21:56:09.448660   48764 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0505 21:56:09.448664   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448670   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0505 21:56:09.448676   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448681   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448692   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0505 21:56:09.448702   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0505 21:56:09.448705   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448710   48764 command_runner.go:130] >       "size": "1363676",
	I0505 21:56:09.448715   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.448728   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.448738   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.448745   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.448751   48764 command_runner.go:130] >     },
	I0505 21:56:09.448757   48764 command_runner.go:130] >     {
	I0505 21:56:09.448772   48764 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0505 21:56:09.448781   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448789   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0505 21:56:09.448797   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448803   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448823   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0505 21:56:09.448838   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0505 21:56:09.448847   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448853   48764 command_runner.go:130] >       "size": "31470524",
	I0505 21:56:09.448863   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.448869   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.448878   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.448884   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.448890   48764 command_runner.go:130] >     },
	I0505 21:56:09.448898   48764 command_runner.go:130] >     {
	I0505 21:56:09.448908   48764 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0505 21:56:09.448917   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.448925   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0505 21:56:09.448934   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448954   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.448967   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0505 21:56:09.448981   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0505 21:56:09.448987   48764 command_runner.go:130] >       ],
	I0505 21:56:09.448992   48764 command_runner.go:130] >       "size": "61245718",
	I0505 21:56:09.448996   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.449000   48764 command_runner.go:130] >       "username": "nonroot",
	I0505 21:56:09.449009   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449015   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449023   48764 command_runner.go:130] >     },
	I0505 21:56:09.449029   48764 command_runner.go:130] >     {
	I0505 21:56:09.449042   48764 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0505 21:56:09.449051   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449058   48764 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0505 21:56:09.449066   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449072   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449084   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0505 21:56:09.449098   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0505 21:56:09.449105   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449109   48764 command_runner.go:130] >       "size": "150779692",
	I0505 21:56:09.449115   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449119   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449123   48764 command_runner.go:130] >       },
	I0505 21:56:09.449127   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449131   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449137   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449141   48764 command_runner.go:130] >     },
	I0505 21:56:09.449144   48764 command_runner.go:130] >     {
	I0505 21:56:09.449150   48764 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0505 21:56:09.449160   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449165   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0505 21:56:09.449168   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449172   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449179   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0505 21:56:09.449189   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0505 21:56:09.449192   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449201   48764 command_runner.go:130] >       "size": "117609952",
	I0505 21:56:09.449208   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449212   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449218   48764 command_runner.go:130] >       },
	I0505 21:56:09.449222   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449226   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449230   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449235   48764 command_runner.go:130] >     },
	I0505 21:56:09.449239   48764 command_runner.go:130] >     {
	I0505 21:56:09.449247   48764 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0505 21:56:09.449251   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449259   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0505 21:56:09.449265   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449272   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449282   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0505 21:56:09.449292   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0505 21:56:09.449297   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449302   48764 command_runner.go:130] >       "size": "112170310",
	I0505 21:56:09.449307   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449311   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449316   48764 command_runner.go:130] >       },
	I0505 21:56:09.449320   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449326   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449330   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449333   48764 command_runner.go:130] >     },
	I0505 21:56:09.449340   48764 command_runner.go:130] >     {
	I0505 21:56:09.449346   48764 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0505 21:56:09.449352   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449357   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0505 21:56:09.449363   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449367   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449389   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0505 21:56:09.449398   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0505 21:56:09.449404   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449409   48764 command_runner.go:130] >       "size": "85932953",
	I0505 21:56:09.449415   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.449423   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449430   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449434   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449437   48764 command_runner.go:130] >     },
	I0505 21:56:09.449440   48764 command_runner.go:130] >     {
	I0505 21:56:09.449445   48764 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0505 21:56:09.449449   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449453   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0505 21:56:09.449457   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449460   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449467   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0505 21:56:09.449476   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0505 21:56:09.449481   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449485   48764 command_runner.go:130] >       "size": "63026502",
	I0505 21:56:09.449491   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449495   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.449501   48764 command_runner.go:130] >       },
	I0505 21:56:09.449505   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449511   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449515   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.449520   48764 command_runner.go:130] >     },
	I0505 21:56:09.449523   48764 command_runner.go:130] >     {
	I0505 21:56:09.449531   48764 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0505 21:56:09.449536   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.449540   48764 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0505 21:56:09.449546   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449549   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.449558   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0505 21:56:09.449567   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0505 21:56:09.449571   48764 command_runner.go:130] >       ],
	I0505 21:56:09.449578   48764 command_runner.go:130] >       "size": "750414",
	I0505 21:56:09.449582   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.449588   48764 command_runner.go:130] >         "value": "65535"
	I0505 21:56:09.449591   48764 command_runner.go:130] >       },
	I0505 21:56:09.449597   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.449602   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.449613   48764 command_runner.go:130] >       "pinned": true
	I0505 21:56:09.449619   48764 command_runner.go:130] >     }
	I0505 21:56:09.449622   48764 command_runner.go:130] >   ]
	I0505 21:56:09.449628   48764 command_runner.go:130] > }
	I0505 21:56:09.449804   48764 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:56:09.449817   48764 crio.go:433] Images already preloaded, skipping extraction
	I0505 21:56:09.449876   48764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 21:56:09.487917   48764 command_runner.go:130] > {
	I0505 21:56:09.487943   48764 command_runner.go:130] >   "images": [
	I0505 21:56:09.487950   48764 command_runner.go:130] >     {
	I0505 21:56:09.487962   48764 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0505 21:56:09.487969   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.487979   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0505 21:56:09.487988   48764 command_runner.go:130] >       ],
	I0505 21:56:09.487995   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488014   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0505 21:56:09.488028   48764 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0505 21:56:09.488046   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488057   48764 command_runner.go:130] >       "size": "65291810",
	I0505 21:56:09.488065   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488069   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488077   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488083   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488087   48764 command_runner.go:130] >     },
	I0505 21:56:09.488090   48764 command_runner.go:130] >     {
	I0505 21:56:09.488097   48764 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0505 21:56:09.488103   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488108   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0505 21:56:09.488114   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488119   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488129   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0505 21:56:09.488139   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0505 21:56:09.488145   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488149   48764 command_runner.go:130] >       "size": "1363676",
	I0505 21:56:09.488154   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488162   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488168   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488171   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488174   48764 command_runner.go:130] >     },
	I0505 21:56:09.488180   48764 command_runner.go:130] >     {
	I0505 21:56:09.488186   48764 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0505 21:56:09.488192   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488197   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0505 21:56:09.488203   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488208   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488217   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0505 21:56:09.488227   48764 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0505 21:56:09.488233   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488238   48764 command_runner.go:130] >       "size": "31470524",
	I0505 21:56:09.488244   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488248   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488258   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488265   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488268   48764 command_runner.go:130] >     },
	I0505 21:56:09.488272   48764 command_runner.go:130] >     {
	I0505 21:56:09.488278   48764 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0505 21:56:09.488285   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488290   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0505 21:56:09.488295   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488300   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488309   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0505 21:56:09.488325   48764 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0505 21:56:09.488331   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488338   48764 command_runner.go:130] >       "size": "61245718",
	I0505 21:56:09.488346   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488356   48764 command_runner.go:130] >       "username": "nonroot",
	I0505 21:56:09.488366   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488375   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488383   48764 command_runner.go:130] >     },
	I0505 21:56:09.488391   48764 command_runner.go:130] >     {
	I0505 21:56:09.488401   48764 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0505 21:56:09.488407   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488412   48764 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0505 21:56:09.488418   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488422   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488432   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0505 21:56:09.488441   48764 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0505 21:56:09.488447   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488451   48764 command_runner.go:130] >       "size": "150779692",
	I0505 21:56:09.488456   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488460   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488467   48764 command_runner.go:130] >       },
	I0505 21:56:09.488471   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488477   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488481   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488487   48764 command_runner.go:130] >     },
	I0505 21:56:09.488490   48764 command_runner.go:130] >     {
	I0505 21:56:09.488504   48764 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0505 21:56:09.488510   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488516   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0505 21:56:09.488521   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488525   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488534   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0505 21:56:09.488543   48764 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0505 21:56:09.488548   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488552   48764 command_runner.go:130] >       "size": "117609952",
	I0505 21:56:09.488558   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488562   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488568   48764 command_runner.go:130] >       },
	I0505 21:56:09.488572   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488578   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488582   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488585   48764 command_runner.go:130] >     },
	I0505 21:56:09.488591   48764 command_runner.go:130] >     {
	I0505 21:56:09.488597   48764 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0505 21:56:09.488603   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488608   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0505 21:56:09.488614   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488618   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488628   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0505 21:56:09.488638   48764 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0505 21:56:09.488650   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488656   48764 command_runner.go:130] >       "size": "112170310",
	I0505 21:56:09.488660   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488664   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488667   48764 command_runner.go:130] >       },
	I0505 21:56:09.488671   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488678   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488697   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488703   48764 command_runner.go:130] >     },
	I0505 21:56:09.488707   48764 command_runner.go:130] >     {
	I0505 21:56:09.488712   48764 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0505 21:56:09.488716   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488725   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0505 21:56:09.488731   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488735   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488757   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0505 21:56:09.488767   48764 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0505 21:56:09.488770   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488775   48764 command_runner.go:130] >       "size": "85932953",
	I0505 21:56:09.488781   48764 command_runner.go:130] >       "uid": null,
	I0505 21:56:09.488785   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488791   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488795   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488800   48764 command_runner.go:130] >     },
	I0505 21:56:09.488804   48764 command_runner.go:130] >     {
	I0505 21:56:09.488814   48764 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0505 21:56:09.488824   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488836   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0505 21:56:09.488843   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488849   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.488862   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0505 21:56:09.488877   48764 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0505 21:56:09.488886   48764 command_runner.go:130] >       ],
	I0505 21:56:09.488895   48764 command_runner.go:130] >       "size": "63026502",
	I0505 21:56:09.488903   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.488910   48764 command_runner.go:130] >         "value": "0"
	I0505 21:56:09.488917   48764 command_runner.go:130] >       },
	I0505 21:56:09.488926   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.488932   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.488947   48764 command_runner.go:130] >       "pinned": false
	I0505 21:56:09.488956   48764 command_runner.go:130] >     },
	I0505 21:56:09.488964   48764 command_runner.go:130] >     {
	I0505 21:56:09.488976   48764 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0505 21:56:09.488985   48764 command_runner.go:130] >       "repoTags": [
	I0505 21:56:09.488995   48764 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0505 21:56:09.489003   48764 command_runner.go:130] >       ],
	I0505 21:56:09.489012   48764 command_runner.go:130] >       "repoDigests": [
	I0505 21:56:09.489026   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0505 21:56:09.489046   48764 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0505 21:56:09.489055   48764 command_runner.go:130] >       ],
	I0505 21:56:09.489063   48764 command_runner.go:130] >       "size": "750414",
	I0505 21:56:09.489069   48764 command_runner.go:130] >       "uid": {
	I0505 21:56:09.489077   48764 command_runner.go:130] >         "value": "65535"
	I0505 21:56:09.489086   48764 command_runner.go:130] >       },
	I0505 21:56:09.489093   48764 command_runner.go:130] >       "username": "",
	I0505 21:56:09.489101   48764 command_runner.go:130] >       "spec": null,
	I0505 21:56:09.489109   48764 command_runner.go:130] >       "pinned": true
	I0505 21:56:09.489112   48764 command_runner.go:130] >     }
	I0505 21:56:09.489117   48764 command_runner.go:130] >   ]
	I0505 21:56:09.489120   48764 command_runner.go:130] > }
	I0505 21:56:09.489642   48764 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 21:56:09.489660   48764 cache_images.go:84] Images are preloaded, skipping loading
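	The two JSON dumps above are the output of `sudo crictl images --output json` that minikube inspects before deciding it can skip extracting the preload tarball (crio.go:514, cache_images.go:84). Below is a minimal, illustrative Go sketch of that kind of check, not minikube's actual implementation; it assumes crictl is installed and runnable via sudo, and the required-image list is copied from the repoTags visible in the dump.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// crictlImage mirrors the fields of interest in the `crictl images --output json`
	// payload shown in the log above (id, repoTags, pinned).
	type crictlImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// The log runs this over SSH inside the minikube VM; this sketch assumes
		// crictl is available locally and can be invoked with sudo.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}

		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}

		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}

		// Required tags copied from the repoTags in the dump above (Kubernetes v1.30.0 preload).
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.0",
			"registry.k8s.io/kube-controller-manager:v1.30.0",
			"registry.k8s.io/kube-scheduler:v1.30.0",
			"registry.k8s.io/kube-proxy:v1.30.0",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
		}

		var missing []string
		for _, tag := range required {
			if !have[tag] {
				missing = append(missing, tag)
			}
		}
		if len(missing) == 0 {
			fmt.Println("all images are preloaded for cri-o runtime.")
		} else {
			fmt.Println("missing images:", strings.Join(missing, ", "))
		}
	}

	Run inside the VM, this should reach the same "all images are preloaded" conclusion that crio.go:514 logs above.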
	I0505 21:56:09.489668   48764 kubeadm.go:928] updating node { 192.168.39.30 8443 v1.30.0 crio true true} ...
	I0505 21:56:09.489775   48764 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-019621 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
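	The kubelet block logged by kubeadm.go:940 above is the systemd drop-in text minikube renders for this node. As a rough sketch built only from the values visible in the log (the struct and template names here are illustrative, not minikube's own types), the same text can be reproduced with text/template:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnitTmpl reproduces the drop-in text logged by kubeadm.go:940 above.
	const kubeletUnitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		node := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{
			KubernetesVersion: "v1.30.0",          // from the kubeadm.go:928 line above
			NodeName:          "multinode-019621", // --hostname-override in the log
			NodeIP:            "192.168.39.30",    // --node-ip in the log
		}
		t := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl))
		if err := t.Execute(os.Stdout, node); err != nil {
			panic(err)
		}
	}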
	I0505 21:56:09.489844   48764 ssh_runner.go:195] Run: crio config
	I0505 21:56:09.535603   48764 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0505 21:56:09.535627   48764 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0505 21:56:09.535635   48764 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0505 21:56:09.535638   48764 command_runner.go:130] > #
	I0505 21:56:09.535645   48764 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0505 21:56:09.535651   48764 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0505 21:56:09.535659   48764 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0505 21:56:09.535669   48764 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0505 21:56:09.535675   48764 command_runner.go:130] > # reload'.
	I0505 21:56:09.535684   48764 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0505 21:56:09.535694   48764 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0505 21:56:09.535703   48764 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0505 21:56:09.535724   48764 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0505 21:56:09.535729   48764 command_runner.go:130] > [crio]
	I0505 21:56:09.535739   48764 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0505 21:56:09.535747   48764 command_runner.go:130] > # containers images, in this directory.
	I0505 21:56:09.535757   48764 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0505 21:56:09.535773   48764 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0505 21:56:09.536083   48764 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0505 21:56:09.536106   48764 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0505 21:56:09.536398   48764 command_runner.go:130] > # imagestore = ""
	I0505 21:56:09.536415   48764 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0505 21:56:09.536425   48764 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0505 21:56:09.536600   48764 command_runner.go:130] > storage_driver = "overlay"
	I0505 21:56:09.536617   48764 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0505 21:56:09.536626   48764 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0505 21:56:09.536633   48764 command_runner.go:130] > storage_option = [
	I0505 21:56:09.536807   48764 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0505 21:56:09.536953   48764 command_runner.go:130] > ]
	I0505 21:56:09.536969   48764 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0505 21:56:09.536979   48764 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0505 21:56:09.537384   48764 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0505 21:56:09.537399   48764 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0505 21:56:09.537409   48764 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0505 21:56:09.537417   48764 command_runner.go:130] > # always happen on a node reboot
	I0505 21:56:09.537774   48764 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0505 21:56:09.537807   48764 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0505 21:56:09.537823   48764 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0505 21:56:09.537835   48764 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0505 21:56:09.537898   48764 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0505 21:56:09.537916   48764 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0505 21:56:09.537928   48764 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0505 21:56:09.538286   48764 command_runner.go:130] > # internal_wipe = true
	I0505 21:56:09.538304   48764 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0505 21:56:09.538313   48764 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0505 21:56:09.539023   48764 command_runner.go:130] > # internal_repair = false
	I0505 21:56:09.539040   48764 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0505 21:56:09.539050   48764 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0505 21:56:09.539059   48764 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0505 21:56:09.539330   48764 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0505 21:56:09.539346   48764 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0505 21:56:09.539352   48764 command_runner.go:130] > [crio.api]
	I0505 21:56:09.539361   48764 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0505 21:56:09.539370   48764 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0505 21:56:09.539382   48764 command_runner.go:130] > # IP address on which the stream server will listen.
	I0505 21:56:09.539389   48764 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0505 21:56:09.539409   48764 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0505 21:56:09.539418   48764 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0505 21:56:09.539427   48764 command_runner.go:130] > # stream_port = "0"
	I0505 21:56:09.539436   48764 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0505 21:56:09.539446   48764 command_runner.go:130] > # stream_enable_tls = false
	I0505 21:56:09.539457   48764 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0505 21:56:09.539466   48764 command_runner.go:130] > # stream_idle_timeout = ""
	I0505 21:56:09.539476   48764 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0505 21:56:09.539500   48764 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0505 21:56:09.539506   48764 command_runner.go:130] > # minutes.
	I0505 21:56:09.539515   48764 command_runner.go:130] > # stream_tls_cert = ""
	I0505 21:56:09.539524   48764 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0505 21:56:09.539537   48764 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0505 21:56:09.539545   48764 command_runner.go:130] > # stream_tls_key = ""
	I0505 21:56:09.539556   48764 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0505 21:56:09.539566   48764 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0505 21:56:09.539593   48764 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0505 21:56:09.539606   48764 command_runner.go:130] > # stream_tls_ca = ""
	I0505 21:56:09.539617   48764 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0505 21:56:09.539627   48764 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0505 21:56:09.539640   48764 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0505 21:56:09.539658   48764 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0505 21:56:09.539670   48764 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0505 21:56:09.539684   48764 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0505 21:56:09.539698   48764 command_runner.go:130] > [crio.runtime]
	I0505 21:56:09.539711   48764 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0505 21:56:09.539722   48764 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0505 21:56:09.539729   48764 command_runner.go:130] > # "nofile=1024:2048"
	I0505 21:56:09.539741   48764 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0505 21:56:09.539747   48764 command_runner.go:130] > # default_ulimits = [
	I0505 21:56:09.539757   48764 command_runner.go:130] > # ]
	I0505 21:56:09.539766   48764 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0505 21:56:09.539776   48764 command_runner.go:130] > # no_pivot = false
	I0505 21:56:09.539785   48764 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0505 21:56:09.539798   48764 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0505 21:56:09.539809   48764 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0505 21:56:09.539822   48764 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0505 21:56:09.539833   48764 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0505 21:56:09.539848   48764 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0505 21:56:09.539858   48764 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0505 21:56:09.539865   48764 command_runner.go:130] > # Cgroup setting for conmon
	I0505 21:56:09.539879   48764 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0505 21:56:09.539889   48764 command_runner.go:130] > conmon_cgroup = "pod"
	I0505 21:56:09.539899   48764 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0505 21:56:09.539910   48764 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0505 21:56:09.539923   48764 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0505 21:56:09.539929   48764 command_runner.go:130] > conmon_env = [
	I0505 21:56:09.539942   48764 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0505 21:56:09.539948   48764 command_runner.go:130] > ]
	I0505 21:56:09.539953   48764 command_runner.go:130] > # Additional environment variables to set for all the
	I0505 21:56:09.539961   48764 command_runner.go:130] > # containers. These are overridden if set in the
	I0505 21:56:09.539966   48764 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0505 21:56:09.539973   48764 command_runner.go:130] > # default_env = [
	I0505 21:56:09.539976   48764 command_runner.go:130] > # ]
	I0505 21:56:09.539982   48764 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0505 21:56:09.539991   48764 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0505 21:56:09.539995   48764 command_runner.go:130] > # selinux = false
	I0505 21:56:09.540006   48764 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0505 21:56:09.540016   48764 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0505 21:56:09.540021   48764 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0505 21:56:09.540028   48764 command_runner.go:130] > # seccomp_profile = ""
	I0505 21:56:09.540037   48764 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0505 21:56:09.540048   48764 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0505 21:56:09.540057   48764 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0505 21:56:09.540067   48764 command_runner.go:130] > # which might increase security.
	I0505 21:56:09.540078   48764 command_runner.go:130] > # This option is currently deprecated,
	I0505 21:56:09.540090   48764 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0505 21:56:09.540100   48764 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0505 21:56:09.540111   48764 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0505 21:56:09.540128   48764 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0505 21:56:09.540137   48764 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0505 21:56:09.540142   48764 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0505 21:56:09.540153   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.540163   48764 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0505 21:56:09.540176   48764 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0505 21:56:09.540186   48764 command_runner.go:130] > # the cgroup blockio controller.
	I0505 21:56:09.540195   48764 command_runner.go:130] > # blockio_config_file = ""
	I0505 21:56:09.540208   48764 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0505 21:56:09.540214   48764 command_runner.go:130] > # blockio parameters.
	I0505 21:56:09.540224   48764 command_runner.go:130] > # blockio_reload = false
	I0505 21:56:09.540236   48764 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0505 21:56:09.540241   48764 command_runner.go:130] > # irqbalance daemon.
	I0505 21:56:09.540253   48764 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0505 21:56:09.540267   48764 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0505 21:56:09.540281   48764 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0505 21:56:09.540294   48764 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0505 21:56:09.540303   48764 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0505 21:56:09.540313   48764 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0505 21:56:09.540324   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.540332   48764 command_runner.go:130] > # rdt_config_file = ""
	I0505 21:56:09.540337   48764 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0505 21:56:09.540343   48764 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0505 21:56:09.540372   48764 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0505 21:56:09.540390   48764 command_runner.go:130] > # separate_pull_cgroup = ""
	I0505 21:56:09.540396   48764 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0505 21:56:09.540402   48764 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0505 21:56:09.540406   48764 command_runner.go:130] > # will be added.
	I0505 21:56:09.540410   48764 command_runner.go:130] > # default_capabilities = [
	I0505 21:56:09.540413   48764 command_runner.go:130] > # 	"CHOWN",
	I0505 21:56:09.540417   48764 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0505 21:56:09.540420   48764 command_runner.go:130] > # 	"FSETID",
	I0505 21:56:09.540424   48764 command_runner.go:130] > # 	"FOWNER",
	I0505 21:56:09.540427   48764 command_runner.go:130] > # 	"SETGID",
	I0505 21:56:09.540431   48764 command_runner.go:130] > # 	"SETUID",
	I0505 21:56:09.540435   48764 command_runner.go:130] > # 	"SETPCAP",
	I0505 21:56:09.540439   48764 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0505 21:56:09.540443   48764 command_runner.go:130] > # 	"KILL",
	I0505 21:56:09.540446   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540453   48764 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0505 21:56:09.540462   48764 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0505 21:56:09.540466   48764 command_runner.go:130] > # add_inheritable_capabilities = false
	I0505 21:56:09.540473   48764 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0505 21:56:09.540478   48764 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0505 21:56:09.540484   48764 command_runner.go:130] > default_sysctls = [
	I0505 21:56:09.540489   48764 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0505 21:56:09.540496   48764 command_runner.go:130] > ]
	I0505 21:56:09.540503   48764 command_runner.go:130] > # List of devices on the host that a
	I0505 21:56:09.540516   48764 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0505 21:56:09.540526   48764 command_runner.go:130] > # allowed_devices = [
	I0505 21:56:09.540532   48764 command_runner.go:130] > # 	"/dev/fuse",
	I0505 21:56:09.540541   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540547   48764 command_runner.go:130] > # List of additional devices. specified as
	I0505 21:56:09.540557   48764 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0505 21:56:09.540562   48764 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0505 21:56:09.540570   48764 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0505 21:56:09.540574   48764 command_runner.go:130] > # additional_devices = [
	I0505 21:56:09.540578   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540583   48764 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0505 21:56:09.540586   48764 command_runner.go:130] > # cdi_spec_dirs = [
	I0505 21:56:09.540595   48764 command_runner.go:130] > # 	"/etc/cdi",
	I0505 21:56:09.540601   48764 command_runner.go:130] > # 	"/var/run/cdi",
	I0505 21:56:09.540605   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540611   48764 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0505 21:56:09.540619   48764 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0505 21:56:09.540624   48764 command_runner.go:130] > # Defaults to false.
	I0505 21:56:09.540629   48764 command_runner.go:130] > # device_ownership_from_security_context = false
	I0505 21:56:09.540637   48764 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0505 21:56:09.540643   48764 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0505 21:56:09.540649   48764 command_runner.go:130] > # hooks_dir = [
	I0505 21:56:09.540654   48764 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0505 21:56:09.540660   48764 command_runner.go:130] > # ]
	I0505 21:56:09.540665   48764 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0505 21:56:09.540671   48764 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0505 21:56:09.540678   48764 command_runner.go:130] > # its default mounts from the following two files:
	I0505 21:56:09.540683   48764 command_runner.go:130] > #
	I0505 21:56:09.540703   48764 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0505 21:56:09.540717   48764 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0505 21:56:09.540727   48764 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0505 21:56:09.540730   48764 command_runner.go:130] > #
	I0505 21:56:09.540736   48764 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0505 21:56:09.540744   48764 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0505 21:56:09.540751   48764 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0505 21:56:09.540757   48764 command_runner.go:130] > #      only add mounts it finds in this file.
	I0505 21:56:09.540761   48764 command_runner.go:130] > #
	I0505 21:56:09.540764   48764 command_runner.go:130] > # default_mounts_file = ""
	I0505 21:56:09.540769   48764 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0505 21:56:09.540777   48764 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0505 21:56:09.540780   48764 command_runner.go:130] > pids_limit = 1024
	I0505 21:56:09.540786   48764 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0505 21:56:09.540796   48764 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0505 21:56:09.540809   48764 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0505 21:56:09.540823   48764 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0505 21:56:09.540834   48764 command_runner.go:130] > # log_size_max = -1
	I0505 21:56:09.540845   48764 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0505 21:56:09.540854   48764 command_runner.go:130] > # log_to_journald = false
	I0505 21:56:09.540865   48764 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0505 21:56:09.540873   48764 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0505 21:56:09.540878   48764 command_runner.go:130] > # Path to directory for container attach sockets.
	I0505 21:56:09.540884   48764 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0505 21:56:09.540890   48764 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0505 21:56:09.540896   48764 command_runner.go:130] > # bind_mount_prefix = ""
	I0505 21:56:09.540904   48764 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0505 21:56:09.540914   48764 command_runner.go:130] > # read_only = false
	I0505 21:56:09.540923   48764 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0505 21:56:09.540937   48764 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0505 21:56:09.540950   48764 command_runner.go:130] > # live configuration reload.
	I0505 21:56:09.540960   48764 command_runner.go:130] > # log_level = "info"
	I0505 21:56:09.540969   48764 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0505 21:56:09.540981   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.540991   48764 command_runner.go:130] > # log_filter = ""
	I0505 21:56:09.541001   48764 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0505 21:56:09.541014   48764 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0505 21:56:09.541023   48764 command_runner.go:130] > # separated by comma.
	I0505 21:56:09.541045   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541056   48764 command_runner.go:130] > # uid_mappings = ""
	I0505 21:56:09.541066   48764 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0505 21:56:09.541078   48764 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0505 21:56:09.541088   48764 command_runner.go:130] > # separated by comma.
	I0505 21:56:09.541099   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541108   48764 command_runner.go:130] > # gid_mappings = ""
	I0505 21:56:09.541117   48764 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0505 21:56:09.541132   48764 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0505 21:56:09.541145   48764 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0505 21:56:09.541160   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541167   48764 command_runner.go:130] > # minimum_mappable_uid = -1
	I0505 21:56:09.541179   48764 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0505 21:56:09.541191   48764 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0505 21:56:09.541204   48764 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0505 21:56:09.541215   48764 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0505 21:56:09.541228   48764 command_runner.go:130] > # minimum_mappable_gid = -1
	I0505 21:56:09.541240   48764 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0505 21:56:09.541259   48764 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0505 21:56:09.541271   48764 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0505 21:56:09.541280   48764 command_runner.go:130] > # ctr_stop_timeout = 30
	I0505 21:56:09.541289   48764 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0505 21:56:09.541302   48764 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0505 21:56:09.541313   48764 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0505 21:56:09.541323   48764 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0505 21:56:09.541330   48764 command_runner.go:130] > drop_infra_ctr = false
	I0505 21:56:09.541339   48764 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0505 21:56:09.541351   48764 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0505 21:56:09.541363   48764 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0505 21:56:09.541373   48764 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0505 21:56:09.541385   48764 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0505 21:56:09.541398   48764 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0505 21:56:09.541410   48764 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0505 21:56:09.541422   48764 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0505 21:56:09.541432   48764 command_runner.go:130] > # shared_cpuset = ""
	I0505 21:56:09.541442   48764 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0505 21:56:09.541453   48764 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0505 21:56:09.541460   48764 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0505 21:56:09.541481   48764 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0505 21:56:09.541491   48764 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0505 21:56:09.541500   48764 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0505 21:56:09.541513   48764 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0505 21:56:09.541520   48764 command_runner.go:130] > # enable_criu_support = false
	I0505 21:56:09.541532   48764 command_runner.go:130] > # Enable/disable the generation of the container,
	I0505 21:56:09.541542   48764 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0505 21:56:09.541552   48764 command_runner.go:130] > # enable_pod_events = false
	I0505 21:56:09.541562   48764 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0505 21:56:09.541575   48764 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0505 21:56:09.541584   48764 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0505 21:56:09.541594   48764 command_runner.go:130] > # default_runtime = "runc"
	I0505 21:56:09.541602   48764 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0505 21:56:09.541617   48764 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0505 21:56:09.541639   48764 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0505 21:56:09.541649   48764 command_runner.go:130] > # creation as a file is not desired either.
	I0505 21:56:09.541662   48764 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0505 21:56:09.541673   48764 command_runner.go:130] > # the hostname is being managed dynamically.
	I0505 21:56:09.541683   48764 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0505 21:56:09.541688   48764 command_runner.go:130] > # ]
	I0505 21:56:09.541710   48764 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0505 21:56:09.541723   48764 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0505 21:56:09.541735   48764 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0505 21:56:09.541746   48764 command_runner.go:130] > # Each entry in the table should follow the format:
	I0505 21:56:09.541751   48764 command_runner.go:130] > #
	I0505 21:56:09.541760   48764 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0505 21:56:09.541772   48764 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0505 21:56:09.541827   48764 command_runner.go:130] > # runtime_type = "oci"
	I0505 21:56:09.541835   48764 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0505 21:56:09.541840   48764 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0505 21:56:09.541844   48764 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0505 21:56:09.541848   48764 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0505 21:56:09.541852   48764 command_runner.go:130] > # monitor_env = []
	I0505 21:56:09.541856   48764 command_runner.go:130] > # privileged_without_host_devices = false
	I0505 21:56:09.541863   48764 command_runner.go:130] > # allowed_annotations = []
	I0505 21:56:09.541868   48764 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0505 21:56:09.541874   48764 command_runner.go:130] > # Where:
	I0505 21:56:09.541879   48764 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0505 21:56:09.541885   48764 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0505 21:56:09.541893   48764 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0505 21:56:09.541899   48764 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0505 21:56:09.541905   48764 command_runner.go:130] > #   in $PATH.
	I0505 21:56:09.541911   48764 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0505 21:56:09.541916   48764 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0505 21:56:09.541922   48764 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0505 21:56:09.541928   48764 command_runner.go:130] > #   state.
	I0505 21:56:09.541934   48764 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0505 21:56:09.541940   48764 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0505 21:56:09.541946   48764 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0505 21:56:09.541954   48764 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0505 21:56:09.541960   48764 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0505 21:56:09.541968   48764 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0505 21:56:09.541977   48764 command_runner.go:130] > #   The currently recognized values are:
	I0505 21:56:09.541985   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0505 21:56:09.541992   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0505 21:56:09.542000   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0505 21:56:09.542006   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0505 21:56:09.542015   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0505 21:56:09.542021   48764 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0505 21:56:09.542030   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0505 21:56:09.542035   48764 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0505 21:56:09.542043   48764 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0505 21:56:09.542050   48764 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0505 21:56:09.542056   48764 command_runner.go:130] > #   deprecated option "conmon".
	I0505 21:56:09.542063   48764 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0505 21:56:09.542070   48764 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0505 21:56:09.542076   48764 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0505 21:56:09.542083   48764 command_runner.go:130] > #   should be moved to the container's cgroup
	I0505 21:56:09.542089   48764 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0505 21:56:09.542096   48764 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0505 21:56:09.542103   48764 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0505 21:56:09.542110   48764 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0505 21:56:09.542113   48764 command_runner.go:130] > #
	I0505 21:56:09.542118   48764 command_runner.go:130] > # Using the seccomp notifier feature:
	I0505 21:56:09.542122   48764 command_runner.go:130] > #
	I0505 21:56:09.542128   48764 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0505 21:56:09.542135   48764 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0505 21:56:09.542138   48764 command_runner.go:130] > #
	I0505 21:56:09.542143   48764 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0505 21:56:09.542151   48764 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0505 21:56:09.542154   48764 command_runner.go:130] > #
	I0505 21:56:09.542160   48764 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0505 21:56:09.542164   48764 command_runner.go:130] > # feature.
	I0505 21:56:09.542167   48764 command_runner.go:130] > #
	I0505 21:56:09.542173   48764 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0505 21:56:09.542181   48764 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0505 21:56:09.542187   48764 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0505 21:56:09.542195   48764 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0505 21:56:09.542206   48764 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0505 21:56:09.542211   48764 command_runner.go:130] > #
	I0505 21:56:09.542216   48764 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0505 21:56:09.542224   48764 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0505 21:56:09.542228   48764 command_runner.go:130] > #
	I0505 21:56:09.542234   48764 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0505 21:56:09.542244   48764 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0505 21:56:09.542250   48764 command_runner.go:130] > #
	I0505 21:56:09.542255   48764 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0505 21:56:09.542262   48764 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0505 21:56:09.542267   48764 command_runner.go:130] > # limitation.
	I0505 21:56:09.542272   48764 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0505 21:56:09.542277   48764 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0505 21:56:09.542280   48764 command_runner.go:130] > runtime_type = "oci"
	I0505 21:56:09.542284   48764 command_runner.go:130] > runtime_root = "/run/runc"
	I0505 21:56:09.542288   48764 command_runner.go:130] > runtime_config_path = ""
	I0505 21:56:09.542294   48764 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0505 21:56:09.542298   48764 command_runner.go:130] > monitor_cgroup = "pod"
	I0505 21:56:09.542304   48764 command_runner.go:130] > monitor_exec_cgroup = ""
	I0505 21:56:09.542308   48764 command_runner.go:130] > monitor_env = [
	I0505 21:56:09.542316   48764 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0505 21:56:09.542319   48764 command_runner.go:130] > ]
	I0505 21:56:09.542325   48764 command_runner.go:130] > privileged_without_host_devices = false
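	The [crio.runtime.runtimes.runc] block above is the only handler defined in this run. As a rough illustration of how an extra handler could be added without touching the main file, the sketch below drops a TOML fragment into CRI-O's drop-in directory. The handler name "crun", its paths, and the allowed annotation are assumptions for illustration only, not values taken from this test.

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// Hypothetical drop-in defining an additional runtime handler.
		// CRI-O merges fragments from /etc/crio/crio.conf.d/ at startup;
		// the handler name, binary path, and annotation are assumptions.
		fragment := `[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	allowed_annotations = ["io.kubernetes.cri-o.Devices"]
	`
		if err := os.WriteFile("/etc/crio/crio.conf.d/10-crun.conf", []byte(fragment), 0o644); err != nil {
			log.Fatal(err)
		}
		// A restart (or reload) of the crio service would be needed afterwards.
	}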
	I0505 21:56:09.542332   48764 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0505 21:56:09.542339   48764 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0505 21:56:09.542345   48764 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0505 21:56:09.542353   48764 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0505 21:56:09.542360   48764 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0505 21:56:09.542368   48764 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0505 21:56:09.542376   48764 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0505 21:56:09.542386   48764 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0505 21:56:09.542393   48764 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0505 21:56:09.542400   48764 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0505 21:56:09.542406   48764 command_runner.go:130] > # Example:
	I0505 21:56:09.542411   48764 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0505 21:56:09.542418   48764 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0505 21:56:09.542429   48764 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0505 21:56:09.542436   48764 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0505 21:56:09.542440   48764 command_runner.go:130] > # cpuset = 0
	I0505 21:56:09.542446   48764 command_runner.go:130] > # cpushares = "0-1"
	I0505 21:56:09.542449   48764 command_runner.go:130] > # Where:
	I0505 21:56:09.542454   48764 command_runner.go:130] > # The workload name is workload-type.
	I0505 21:56:09.542463   48764 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0505 21:56:09.542468   48764 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0505 21:56:09.542473   48764 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0505 21:56:09.542483   48764 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0505 21:56:09.542489   48764 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
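	To make the commented workload example above concrete, the small sketch below prints the annotations a pod would carry to opt into the hypothetical "workload-type" workload: the key-only activation annotation plus a per-container override. The container name "ctr1" and the cpushares value are illustrative assumptions mirroring the example text, not configuration present in this run.

	package main

	import "fmt"

	func main() {
		// Pod annotations matching the commented workload example:
		// activation annotation is key-only (value ignored), and the
		// per-container override follows the example's form.
		annotations := map[string]string{
			"io.crio/workload":           "",
			"io.crio.workload-type/ctr1": `{"cpushares": "512"}`,
		}
		for k, v := range annotations {
			fmt.Printf("%s: %q\n", k, v)
		}
	}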
	I0505 21:56:09.542496   48764 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0505 21:56:09.542502   48764 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0505 21:56:09.542508   48764 command_runner.go:130] > # Default value is set to true
	I0505 21:56:09.542512   48764 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0505 21:56:09.542519   48764 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0505 21:56:09.542523   48764 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0505 21:56:09.542530   48764 command_runner.go:130] > # Default value is set to 'false'
	I0505 21:56:09.542534   48764 command_runner.go:130] > # disable_hostport_mapping = false
	I0505 21:56:09.542540   48764 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0505 21:56:09.542545   48764 command_runner.go:130] > #
	I0505 21:56:09.542551   48764 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0505 21:56:09.542559   48764 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0505 21:56:09.542565   48764 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0505 21:56:09.542571   48764 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0505 21:56:09.542576   48764 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0505 21:56:09.542579   48764 command_runner.go:130] > [crio.image]
	I0505 21:56:09.542584   48764 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0505 21:56:09.542588   48764 command_runner.go:130] > # default_transport = "docker://"
	I0505 21:56:09.542594   48764 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0505 21:56:09.542600   48764 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0505 21:56:09.542603   48764 command_runner.go:130] > # global_auth_file = ""
	I0505 21:56:09.542608   48764 command_runner.go:130] > # The image used to instantiate infra containers.
	I0505 21:56:09.542612   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.542617   48764 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0505 21:56:09.542622   48764 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0505 21:56:09.542632   48764 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0505 21:56:09.542637   48764 command_runner.go:130] > # This option supports live configuration reload.
	I0505 21:56:09.542640   48764 command_runner.go:130] > # pause_image_auth_file = ""
	I0505 21:56:09.542645   48764 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0505 21:56:09.542651   48764 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0505 21:56:09.542656   48764 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0505 21:56:09.542661   48764 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0505 21:56:09.542665   48764 command_runner.go:130] > # pause_command = "/pause"
	I0505 21:56:09.542671   48764 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0505 21:56:09.542676   48764 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0505 21:56:09.542682   48764 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0505 21:56:09.542687   48764 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0505 21:56:09.542697   48764 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0505 21:56:09.542708   48764 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0505 21:56:09.542714   48764 command_runner.go:130] > # pinned_images = [
	I0505 21:56:09.542718   48764 command_runner.go:130] > # ]
	I0505 21:56:09.542723   48764 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0505 21:56:09.542729   48764 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0505 21:56:09.542736   48764 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0505 21:56:09.542742   48764 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0505 21:56:09.542749   48764 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0505 21:56:09.542753   48764 command_runner.go:130] > # signature_policy = ""
	I0505 21:56:09.542759   48764 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0505 21:56:09.542764   48764 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0505 21:56:09.542773   48764 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0505 21:56:09.542779   48764 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0505 21:56:09.542787   48764 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0505 21:56:09.542792   48764 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0505 21:56:09.542800   48764 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0505 21:56:09.542806   48764 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0505 21:56:09.542812   48764 command_runner.go:130] > # changing them here.
	I0505 21:56:09.542816   48764 command_runner.go:130] > # insecure_registries = [
	I0505 21:56:09.542819   48764 command_runner.go:130] > # ]
	I0505 21:56:09.542825   48764 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0505 21:56:09.542832   48764 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0505 21:56:09.542836   48764 command_runner.go:130] > # image_volumes = "mkdir"
	I0505 21:56:09.542851   48764 command_runner.go:130] > # Temporary directory to use for storing big files
	I0505 21:56:09.542858   48764 command_runner.go:130] > # big_files_temporary_dir = ""
	I0505 21:56:09.542863   48764 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0505 21:56:09.542869   48764 command_runner.go:130] > # CNI plugins.
	I0505 21:56:09.542873   48764 command_runner.go:130] > [crio.network]
	I0505 21:56:09.542880   48764 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0505 21:56:09.542885   48764 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0505 21:56:09.542890   48764 command_runner.go:130] > # cni_default_network = ""
	I0505 21:56:09.542896   48764 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0505 21:56:09.542902   48764 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0505 21:56:09.542907   48764 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0505 21:56:09.542910   48764 command_runner.go:130] > # plugin_dirs = [
	I0505 21:56:09.542916   48764 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0505 21:56:09.542919   48764 command_runner.go:130] > # ]
	I0505 21:56:09.542924   48764 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0505 21:56:09.542930   48764 command_runner.go:130] > [crio.metrics]
	I0505 21:56:09.542935   48764 command_runner.go:130] > # Globally enable or disable metrics support.
	I0505 21:56:09.542939   48764 command_runner.go:130] > enable_metrics = true
	I0505 21:56:09.542948   48764 command_runner.go:130] > # Specify enabled metrics collectors.
	I0505 21:56:09.542955   48764 command_runner.go:130] > # Per default all metrics are enabled.
	I0505 21:56:09.542961   48764 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0505 21:56:09.542969   48764 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0505 21:56:09.542975   48764 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0505 21:56:09.542981   48764 command_runner.go:130] > # metrics_collectors = [
	I0505 21:56:09.542985   48764 command_runner.go:130] > # 	"operations",
	I0505 21:56:09.542989   48764 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0505 21:56:09.542994   48764 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0505 21:56:09.542999   48764 command_runner.go:130] > # 	"operations_errors",
	I0505 21:56:09.543004   48764 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0505 21:56:09.543010   48764 command_runner.go:130] > # 	"image_pulls_by_name",
	I0505 21:56:09.543014   48764 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0505 21:56:09.543020   48764 command_runner.go:130] > # 	"image_pulls_failures",
	I0505 21:56:09.543024   48764 command_runner.go:130] > # 	"image_pulls_successes",
	I0505 21:56:09.543030   48764 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0505 21:56:09.543034   48764 command_runner.go:130] > # 	"image_layer_reuse",
	I0505 21:56:09.543038   48764 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0505 21:56:09.543047   48764 command_runner.go:130] > # 	"containers_oom_total",
	I0505 21:56:09.543053   48764 command_runner.go:130] > # 	"containers_oom",
	I0505 21:56:09.543057   48764 command_runner.go:130] > # 	"processes_defunct",
	I0505 21:56:09.543062   48764 command_runner.go:130] > # 	"operations_total",
	I0505 21:56:09.543067   48764 command_runner.go:130] > # 	"operations_latency_seconds",
	I0505 21:56:09.543074   48764 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0505 21:56:09.543078   48764 command_runner.go:130] > # 	"operations_errors_total",
	I0505 21:56:09.543082   48764 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0505 21:56:09.543086   48764 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0505 21:56:09.543093   48764 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0505 21:56:09.543097   48764 command_runner.go:130] > # 	"image_pulls_success_total",
	I0505 21:56:09.543101   48764 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0505 21:56:09.543107   48764 command_runner.go:130] > # 	"containers_oom_count_total",
	I0505 21:56:09.543112   48764 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0505 21:56:09.543117   48764 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0505 21:56:09.543120   48764 command_runner.go:130] > # ]
	I0505 21:56:09.543127   48764 command_runner.go:130] > # The port on which the metrics server will listen.
	I0505 21:56:09.543131   48764 command_runner.go:130] > # metrics_port = 9090
	I0505 21:56:09.543138   48764 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0505 21:56:09.543141   48764 command_runner.go:130] > # metrics_socket = ""
	I0505 21:56:09.543151   48764 command_runner.go:130] > # The certificate for the secure metrics server.
	I0505 21:56:09.543160   48764 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0505 21:56:09.543166   48764 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0505 21:56:09.543173   48764 command_runner.go:130] > # certificate on any modification event.
	I0505 21:56:09.543177   48764 command_runner.go:130] > # metrics_cert = ""
	I0505 21:56:09.543183   48764 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0505 21:56:09.543188   48764 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0505 21:56:09.543981   48764 command_runner.go:130] > # metrics_key = ""
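	Since enable_metrics is true in this config and metrics_port defaults to 9090, the metrics endpoint can be scraped directly on the node. A minimal sketch of such a check follows; the localhost address and port are the defaults shown above and were not verified in this run.

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		// Scrape CRI-O's Prometheus endpoint; assumes enable_metrics = true
		// and the default metrics_port = 9090 from the config above.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("status=%s bytes=%d\n", resp.Status, len(body))
	}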
	I0505 21:56:09.544004   48764 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0505 21:56:09.544010   48764 command_runner.go:130] > [crio.tracing]
	I0505 21:56:09.544019   48764 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0505 21:56:09.544025   48764 command_runner.go:130] > # enable_tracing = false
	I0505 21:56:09.544032   48764 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0505 21:56:09.544040   48764 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0505 21:56:09.544053   48764 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0505 21:56:09.544063   48764 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0505 21:56:09.544079   48764 command_runner.go:130] > # CRI-O NRI configuration.
	I0505 21:56:09.544089   48764 command_runner.go:130] > [crio.nri]
	I0505 21:56:09.544095   48764 command_runner.go:130] > # Globally enable or disable NRI.
	I0505 21:56:09.545138   48764 command_runner.go:130] > # enable_nri = false
	I0505 21:56:09.545148   48764 command_runner.go:130] > # NRI socket to listen on.
	I0505 21:56:09.545153   48764 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0505 21:56:09.545157   48764 command_runner.go:130] > # NRI plugin directory to use.
	I0505 21:56:09.545161   48764 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0505 21:56:09.545166   48764 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0505 21:56:09.545170   48764 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0505 21:56:09.545175   48764 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0505 21:56:09.545180   48764 command_runner.go:130] > # nri_disable_connections = false
	I0505 21:56:09.545186   48764 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0505 21:56:09.545191   48764 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0505 21:56:09.545198   48764 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0505 21:56:09.545203   48764 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0505 21:56:09.545208   48764 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0505 21:56:09.545212   48764 command_runner.go:130] > [crio.stats]
	I0505 21:56:09.545220   48764 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0505 21:56:09.545225   48764 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0505 21:56:09.545229   48764 command_runner.go:130] > # stats_collection_period = 0
	I0505 21:56:09.545642   48764 command_runner.go:130] ! time="2024-05-05 21:56:09.501338966Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0505 21:56:09.545662   48764 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0505 21:56:09.545787   48764 cni.go:84] Creating CNI manager for ""
	I0505 21:56:09.545800   48764 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 21:56:09.545809   48764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 21:56:09.545828   48764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.30 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-019621 NodeName:multinode-019621 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 21:56:09.545964   48764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-019621"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
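	The kubeadm config dumped above is rendered from the options logged at kubeadm.go:181. The sketch below shows the general templating technique with Go's text/template, using the node name, IP, and version values from this run; the struct fields and template are simplified assumptions and not minikube's actual implementation.

	package main

	import (
		"os"
		"text/template"
	)

	// Simplified parameters; minikube's real option struct is much larger.
	type params struct {
		NodeName    string
		NodeIP      string
		APIPort     int
		PodSubnet   string
		ServiceCIDR string
		K8sVersion  string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		// Render a trimmed-down version of the config shown above.
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		p := params{
			NodeName:    "multinode-019621",
			NodeIP:      "192.168.39.30",
			APIPort:     8443,
			PodSubnet:   "10.244.0.0/16",
			ServiceCIDR: "10.96.0.0/12",
			K8sVersion:  "v1.30.0",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}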
	
	I0505 21:56:09.546031   48764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 21:56:09.558281   48764 command_runner.go:130] > kubeadm
	I0505 21:56:09.558301   48764 command_runner.go:130] > kubectl
	I0505 21:56:09.558306   48764 command_runner.go:130] > kubelet
	I0505 21:56:09.558372   48764 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 21:56:09.558427   48764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 21:56:09.569885   48764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0505 21:56:09.588324   48764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 21:56:09.606458   48764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0505 21:56:09.624553   48764 ssh_runner.go:195] Run: grep 192.168.39.30	control-plane.minikube.internal$ /etc/hosts
	I0505 21:56:09.628741   48764 command_runner.go:130] > 192.168.39.30	control-plane.minikube.internal
	I0505 21:56:09.628792   48764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 21:56:09.770333   48764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 21:56:09.788875   48764 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621 for IP: 192.168.39.30
	I0505 21:56:09.788902   48764 certs.go:194] generating shared ca certs ...
	I0505 21:56:09.788922   48764 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 21:56:09.789107   48764 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 21:56:09.789172   48764 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 21:56:09.789185   48764 certs.go:256] generating profile certs ...
	I0505 21:56:09.789291   48764 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/client.key
	I0505 21:56:09.789377   48764 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.key.2eb61cd2
	I0505 21:56:09.789432   48764 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.key
	I0505 21:56:09.789445   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 21:56:09.789461   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 21:56:09.789477   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 21:56:09.789489   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 21:56:09.789501   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 21:56:09.789513   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 21:56:09.789525   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 21:56:09.789542   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 21:56:09.789593   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 21:56:09.789622   48764 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 21:56:09.789632   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 21:56:09.789654   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 21:56:09.789686   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 21:56:09.789709   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 21:56:09.789753   48764 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 21:56:09.789787   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> /usr/share/ca-certificates/187982.pem
	I0505 21:56:09.789798   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:09.789822   48764 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem -> /usr/share/ca-certificates/18798.pem
	I0505 21:56:09.790443   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 21:56:09.817370   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 21:56:09.842975   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 21:56:09.868984   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 21:56:09.895031   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0505 21:56:09.921744   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 21:56:09.949042   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 21:56:09.976965   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/multinode-019621/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 21:56:10.003808   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 21:56:10.029460   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 21:56:10.056338   48764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 21:56:10.082631   48764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 21:56:10.101057   48764 ssh_runner.go:195] Run: openssl version
	I0505 21:56:10.107338   48764 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0505 21:56:10.107404   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 21:56:10.119456   48764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.124275   48764 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.124553   48764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.124601   48764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 21:56:10.130568   48764 command_runner.go:130] > 3ec20f2e
	I0505 21:56:10.130759   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 21:56:10.141618   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 21:56:10.154479   48764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.159344   48764 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.159495   48764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.159543   48764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 21:56:10.165489   48764 command_runner.go:130] > b5213941
	I0505 21:56:10.165636   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 21:56:10.176651   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 21:56:10.189570   48764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.194672   48764 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.194782   48764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.194836   48764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 21:56:10.201288   48764 command_runner.go:130] > 51391683
	I0505 21:56:10.201333   48764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
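	The three blocks above install each CA certificate under /etc/ssl/certs by asking openssl for its subject hash and then symlinking <hash>.0 to the PEM file. The sketch below reproduces that technique for a single certificate; the PEM path is taken from the log for illustration, and root privileges are assumed for writing into /etc/ssl/certs.

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
		// Same hashing step the log shows: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Equivalent of `ln -fs <pem> /etc/ssl/certs/<hash>.0`
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", pem)
	}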
	I0505 21:56:10.212405   48764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:56:10.217480   48764 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 21:56:10.217497   48764 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0505 21:56:10.217503   48764 command_runner.go:130] > Device: 253,1	Inode: 533782      Links: 1
	I0505 21:56:10.217509   48764 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0505 21:56:10.217521   48764 command_runner.go:130] > Access: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217526   48764 command_runner.go:130] > Modify: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217532   48764 command_runner.go:130] > Change: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217538   48764 command_runner.go:130] >  Birth: 2024-05-05 21:49:49.082649477 +0000
	I0505 21:56:10.217575   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 21:56:10.223473   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.223540   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 21:56:10.229300   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.229478   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 21:56:10.235376   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.235432   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 21:56:10.241113   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.241272   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 21:56:10.247291   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.247336   48764 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 21:56:10.253541   48764 command_runner.go:130] > Certificate will not expire
	I0505 21:56:10.253603   48764 kubeadm.go:391] StartCluster: {Name:multinode-019621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-019621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.30 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.246 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:56:10.253711   48764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 21:56:10.253762   48764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 21:56:10.294445   48764 command_runner.go:130] > 848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0
	I0505 21:56:10.294468   48764 command_runner.go:130] > b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d
	I0505 21:56:10.294477   48764 command_runner.go:130] > 43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2
	I0505 21:56:10.294582   48764 command_runner.go:130] > 2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b
	I0505 21:56:10.294602   48764 command_runner.go:130] > 5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc
	I0505 21:56:10.294608   48764 command_runner.go:130] > b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed
	I0505 21:56:10.294613   48764 command_runner.go:130] > f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9
	I0505 21:56:10.294632   48764 command_runner.go:130] > e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12
	I0505 21:56:10.296255   48764 cri.go:89] found id: "848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0"
	I0505 21:56:10.296273   48764 cri.go:89] found id: "b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d"
	I0505 21:56:10.296277   48764 cri.go:89] found id: "43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2"
	I0505 21:56:10.296280   48764 cri.go:89] found id: "2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b"
	I0505 21:56:10.296283   48764 cri.go:89] found id: "5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc"
	I0505 21:56:10.296295   48764 cri.go:89] found id: "b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed"
	I0505 21:56:10.296300   48764 cri.go:89] found id: "f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9"
	I0505 21:56:10.296302   48764 cri.go:89] found id: "e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12"
	I0505 21:56:10.296305   48764 cri.go:89] found id: ""
	I0505 21:56:10.296341   48764 ssh_runner.go:195] Run: sudo runc list -f json
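	StartCluster collects the existing kube-system container IDs by running crictl with a namespace label filter (cri.go:54, cri.go:89 above). The sketch below runs the same query locally; it assumes crictl is on PATH and already pointed at the CRI-O socket, which was not separately verified here.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors the command shown above:
		//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}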
	
	
	==> CRI-O <==
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.491315478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946408491293835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09cb8361-c8ba-4598-a363-832d0ec9926b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.491885132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7bd8aee-c156-4641-9562-f75602fbef5c name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.491935284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7bd8aee-c156-4641-9562-f75602fbef5c name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.492615301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7bd8aee-c156-4641-9562-f75602fbef5c name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.539688137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c07d933-d310-4790-a79d-e1d2ba2871f1 name=/runtime.v1.RuntimeService/Version
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.539767093Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c07d933-d310-4790-a79d-e1d2ba2871f1 name=/runtime.v1.RuntimeService/Version
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.541329247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20aacc5a-07d6-4596-a3cb-7ede583571f8 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.541950575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946408541919724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20aacc5a-07d6-4596-a3cb-7ede583571f8 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.542598613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7a5a949-dc98-4a52-bc6b-fd9fa9a03bee name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.542677273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7a5a949-dc98-4a52-bc6b-fd9fa9a03bee name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.543180342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7a5a949-dc98-4a52-bc6b-fd9fa9a03bee name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.589708524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19b896e2-1b6e-4aa6-956d-7e60b21e7da2 name=/runtime.v1.RuntimeService/Version
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.589787322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19b896e2-1b6e-4aa6-956d-7e60b21e7da2 name=/runtime.v1.RuntimeService/Version
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.590922926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4eb3b4f3-d922-413f-8e84-d21bd5e86d76 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.591568708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946408591542627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4eb3b4f3-d922-413f-8e84-d21bd5e86d76 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.592214846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa324ed5-05fa-4455-aa22-4dc88992ad32 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.592267815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa324ed5-05fa-4455-aa22-4dc88992ad32 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.592738623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa324ed5-05fa-4455-aa22-4dc88992ad32 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.642228909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bbe0b48-9cdf-4dbe-9cec-19aec8e0cc65 name=/runtime.v1.RuntimeService/Version
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.642304071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bbe0b48-9cdf-4dbe-9cec-19aec8e0cc65 name=/runtime.v1.RuntimeService/Version
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.643779301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=223b6feb-7bc7-4764-8f2f-8e2df32c8e3b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.644255256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946408644228712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=223b6feb-7bc7-4764-8f2f-8e2df32c8e3b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.645017816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7b6169f-1336-44e7-b54c-6804fed4dca2 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.645075428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7b6169f-1336-44e7-b54c-6804fed4dca2 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:00:08 multinode-019621 crio[2853]: time="2024-05-05 22:00:08.645520031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f38ee383fdf32ce314e35433b654e830e7250cea2d508b078521c284bc60924f,PodSandboxId:631e97f79659b007d136917a9d2f46140e23ecf80cb6f4f0ab2ad4a97d3db047,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714946210455842191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b,PodSandboxId:e494359e7189b7ff5c60a0a8a37c1739ca008f20e7601d83c1c236085703e846,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714946176946596272,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7,PodSandboxId:2585b130ba2ab56b797cd8b935b0e57fc8445015582d0dce2fa8531d2e9b53f6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714946176832636523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6,PodSandboxId:cacf3ad15dd83fd41913544962727b985e406c539993518f94654225e72dad6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714946176756685413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69-985877734928,},Annotations:map[string]
string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea3da7bad03d86b9ef31bb9735cec11ef284df54d515a6f6906c24b8e4311c7,PodSandboxId:cbc5b8b78dcd217d1a72bf21ed4500dc59b38089cd9ad7ddd1b862c40bc24be4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946176682655527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.ku
bernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19,PodSandboxId:16c19bc428099ffb84d9f8635ed9f6281c0122750bd94de6de1570e5460c31a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714946173024867485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813,PodSandboxId:5426c6040d9cb314d8240d3313c84587611dde745ceb6fde3b31462003ef9f42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714946173001893254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.container.hash: 41c7532d,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c,PodSandboxId:62f801040656ab5246f61c3ef5e73d3d6db78b22ea1898643e348aba7107e64d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714946172984520067,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d,PodSandboxId:39d26eeac3507e62e15ba216a92d6758a717a483486c9e194fca690fbac95a5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714946172880117774,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash: e5f44add,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da8dc883b84b13a00d7ddd7caced9b04f1b4fd587ba4727f9ea222bbdc7448c,PodSandboxId:a6044c7de56cf44e67d7336bdc0191bc14846e84cce141a5a0395377b69ff60f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714945864848857026,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-cl7hp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc46928d-642e-418f-9db6-c496cedab268,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd86ba1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848a28f73e60cfa07ad217666af7c053f5c8bede9e3ee3935d24e4522fa3ecc0,PodSandboxId:fee2569ec6aa75279f17fcb1566e3fe2781d53455fef40438622e00187a8fd77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714945815673792748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe,},Annotations:map[string]string{io.kubernetes.container.hash: c0a01775,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d,PodSandboxId:5fda04970b0cc64a8463850f5ee2636112964bc5ec454aa8de47571bf93d708b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714945814455833158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h7tbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11e1eeff-5441-44cd-94b0-b4a8b7773170,},Annotations:map[string]string{io.kubernetes.container.hash: 9d6250a4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2,PodSandboxId:ef9db81f865f272643499a8e2fcac3272fadfac4ca783a2a3aa364192748e609,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714945812603553395,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kbqkb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e2119b45-a792-4860-8906-6d4b422fa032,},Annotations:map[string]string{io.kubernetes.container.hash: 7fa514b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b,PodSandboxId:0fa7490de9d77ae139e7eed838891222f792885e55a3ccc27b94b7285801ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714945812248205273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cpdww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cd95fb4-395f-40b0-ac69
-985877734928,},Annotations:map[string]string{io.kubernetes.container.hash: 3601c664,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed,PodSandboxId:af74c44c9922dd31f1a88db886d1cdae1b1b7e660b9b70f84e705f12e9ec515c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714945793129600792,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd856791294522fc51ff5b62e7cd54b,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc,PodSandboxId:adf73ed15a3d661567c1445806a1d1df39751b1ae5a0a9f2f91a62e6e199221d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714945793156946994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e468251ab344e0f881c437c1f0a903,},Annotations:map[string]string{io.kubernetes.
container.hash: 41c7532d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9,PodSandboxId:812d6350fe96a1738c16f9474b8987ce4310622dcd7e9de22f99483c4601fa24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714945793051911024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87f6a79907fce204a362a8fa77bf50a9,},Annotations:map[string]string{io.kubernetes.container.hash:
e5f44add,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12,PodSandboxId:40300c2b4d51c791fa276ce6a11f3e9a063b13d10c325dd0dbd1d3c4926fc92c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714945792995840064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-019621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf972160d8d32eacd1f5a47e70108580,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7b6169f-1336-44e7-b54c-6804fed4dca2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f38ee383fdf32       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   631e97f79659b       busybox-fc5497c4f-cl7hp
	d1c57f4a374d7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   e494359e7189b       kindnet-kbqkb
	3ca286dc16d88       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   2585b130ba2ab       coredns-7db6d8ff4d-h7tbh
	88a7ed5f5366d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   cacf3ad15dd83       kube-proxy-cpdww
	7ea3da7bad03d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   cbc5b8b78dcd2       storage-provisioner
	03073d2772bd2       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   16c19bc428099       kube-scheduler-multinode-019621
	4fef37118d160       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   5426c6040d9cb       etcd-multinode-019621
	08a22997b781a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   62f801040656a       kube-controller-manager-multinode-019621
	0156d27216fa4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   39d26eeac3507       kube-apiserver-multinode-019621
	5da8dc883b84b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a6044c7de56cf       busybox-fc5497c4f-cl7hp
	848a28f73e60c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   fee2569ec6aa7       storage-provisioner
	b21f2ab80afb5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   5fda04970b0cc       coredns-7db6d8ff4d-h7tbh
	43ae3bcd41585       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   ef9db81f865f2       kindnet-kbqkb
	2014ff87bd1eb       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      9 minutes ago       Exited              kube-proxy                0                   0fa7490de9d77       kube-proxy-cpdww
	5cd2dc1892eb7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   adf73ed15a3d6       etcd-multinode-019621
	b1b5f166a5cf3       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   af74c44c9922d       kube-scheduler-multinode-019621
	f0e5121525f07       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   812d6350fe96a       kube-apiserver-multinode-019621
	e409273ba65ef       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   40300c2b4d51c       kube-controller-manager-multinode-019621
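A rough way to reproduce the container-status table above outside of the collected logs (an assumption, not part of the captured run; it presumes the multinode-019621 profile is still up) is to query the CRI runtime directly over minikube ssh:

	out/minikube-linux-amd64 -p multinode-019621 ssh "sudo crictl ps -a"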
	
	
	==> coredns [3ca286dc16d8881c75da53df18d0329888f64922bf73d2db385591f8ce0a85b7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54751 - 3145 "HINFO IN 7127606931689558220.7006574501752443575. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020443773s
	
	
	==> coredns [b21f2ab80afb53329996f80118878f1af7b09f155fee5dc009087a64169de32d] <==
	[INFO] 10.244.1.2:48214 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001453962s
	[INFO] 10.244.1.2:41359 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127209s
	[INFO] 10.244.1.2:43048 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078756s
	[INFO] 10.244.1.2:55869 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291146s
	[INFO] 10.244.1.2:40410 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011286s
	[INFO] 10.244.1.2:38218 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101575s
	[INFO] 10.244.1.2:54239 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089288s
	[INFO] 10.244.0.3:51811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142086s
	[INFO] 10.244.0.3:46046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172671s
	[INFO] 10.244.0.3:54744 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113072s
	[INFO] 10.244.0.3:41169 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043167s
	[INFO] 10.244.1.2:56458 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164591s
	[INFO] 10.244.1.2:55611 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011122s
	[INFO] 10.244.1.2:58210 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104363s
	[INFO] 10.244.1.2:37837 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072275s
	[INFO] 10.244.0.3:41990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121911s
	[INFO] 10.244.0.3:58160 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184697s
	[INFO] 10.244.0.3:53311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000276803s
	[INFO] 10.244.0.3:35251 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013276s
	[INFO] 10.244.1.2:34759 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000290898s
	[INFO] 10.244.1.2:51120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177281s
	[INFO] 10.244.1.2:47270 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166876s
	[INFO] 10.244.1.2:47064 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111541s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
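The coredns log above shows lookups for kubernetes.default and host.minikube.internal succeeding before the SIGTERM. A hedged way to exercise the same resolution path from a throwaway pod (dns-probe is a hypothetical name; this assumes the kubectl context carries the profile name, as minikube normally configures):

	kubectl --context multinode-019621 run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local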
	
	
	==> describe nodes <==
	Name:               multinode-019621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-019621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=multinode-019621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T21_49_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:49:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-019621
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:59:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:49:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:49:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:49:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:56:16 +0000   Sun, 05 May 2024 21:50:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.30
	  Hostname:    multinode-019621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4df3d406bb44e15a4bddf8a7d93deb5
	  System UUID:                b4df3d40-6bb4-4e15-a4bd-df8a7d93deb5
	  Boot ID:                    7bb3c348-b8ac-4623-b778-6e10b769905e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cl7hp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 coredns-7db6d8ff4d-h7tbh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m56s
	  kube-system                 etcd-multinode-019621                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-kbqkb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m57s
	  kube-system                 kube-apiserver-multinode-019621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-019621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-cpdww                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-scheduler-multinode-019621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m56s                  kube-proxy       
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-019621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-019621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-019621 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m58s                  node-controller  Node multinode-019621 event: Registered Node multinode-019621 in Controller
	  Normal  NodeReady                9m55s                  kubelet          Node multinode-019621 status is now: NodeReady
	  Normal  Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s (x8 over 3m56s)  kubelet          Node multinode-019621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x8 over 3m56s)  kubelet          Node multinode-019621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x7 over 3m56s)  kubelet          Node multinode-019621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m40s                  node-controller  Node multinode-019621 event: Registered Node multinode-019621 in Controller
	
	
	Name:               multinode-019621-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-019621-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=multinode-019621
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T21_56_59_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:56:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-019621-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:57:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:58:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:58:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:58:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:57:29 +0000   Sun, 05 May 2024 21:58:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    multinode-019621-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f76a590973448cab36871d0ae884056
	  System UUID:                0f76a590-9734-48ca-b368-71d0ae884056
	  Boot ID:                    e2002c0e-5840-4247-b771-41a76f27395e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-58lzm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kindnet-4d86k              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m19s
	  kube-system                 kube-proxy-fvqcb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m14s                  kube-proxy       
	  Normal  Starting                 3m5s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s (x2 over 9m19s)  kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x2 over 9m19s)  kubelet          Node multinode-019621-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x2 over 9m19s)  kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m9s                   kubelet          Node multinode-019621-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m10s)   kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m10s)   kubelet          Node multinode-019621-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m10s)   kubelet          Node multinode-019621-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m5s                   node-controller  Node multinode-019621-m02 event: Registered Node multinode-019621-m02 in Controller
	  Normal  NodeReady                3m1s                   kubelet          Node multinode-019621-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-019621-m02 status is now: NodeNotReady
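multinode-019621-m02 is reporting Ready=Unknown with unreachable taints after its kubelet stopped posting status. A quick cross-check against the API server (again assuming the kubectl context is named after the profile, which is minikube's default):

	kubectl --context multinode-019621 get nodes -o wide
	kubectl --context multinode-019621 describe node multinode-019621-m02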
	
	
	==> dmesg <==
	[  +0.055833] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059481] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.185958] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.145941] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.282937] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.882997] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.068390] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.588834] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.660409] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.909630] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.094283] kauditd_printk_skb: 41 callbacks suppressed
	[May 5 21:50] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.101601] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[May 5 21:51] kauditd_printk_skb: 84 callbacks suppressed
	[May 5 21:56] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.151412] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.192792] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.136857] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +0.310207] systemd-fstab-generator[2838]: Ignoring "noauto" option for root device
	[  +1.101670] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[  +2.268051] systemd-fstab-generator[3060]: Ignoring "noauto" option for root device
	[  +0.943417] kauditd_printk_skb: 154 callbacks suppressed
	[ +16.176838] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.105310] systemd-fstab-generator[3876]: Ignoring "noauto" option for root device
	[ +18.235404] kauditd_printk_skb: 14 callbacks suppressed
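If more kernel context is needed than the excerpt above, a straightforward follow-up (assuming the VM is still running) is to pull dmesg over minikube ssh:

	out/minikube-linux-amd64 -p multinode-019621 ssh "dmesg | tail -n 50"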
	
	
	==> etcd [4fef37118d160fb7416289be0385bebb9522627a84ea0821a6e1a5fe5da2c813] <==
	{"level":"info","ts":"2024-05-05T21:56:13.689109Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-05T21:56:13.680441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 switched to configuration voters=(4633241037315770128)"}
	{"level":"info","ts":"2024-05-05T21:56:13.689858Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ae8b7a508f3fd394","local-member-id":"404c942cebf80710","added-peer-id":"404c942cebf80710","added-peer-peer-urls":["https://192.168.39.30:2380"]}
	{"level":"info","ts":"2024-05-05T21:56:13.69011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-05T21:56:13.700082Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae8b7a508f3fd394","local-member-id":"404c942cebf80710","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:56:13.703588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T21:56:13.727837Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-05T21:56:13.729332Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"404c942cebf80710","initial-advertise-peer-urls":["https://192.168.39.30:2380"],"listen-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.30:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-05T21:56:13.733844Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-05T21:56:13.728151Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:56:13.746536Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:56:14.598624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-05T21:56:14.598703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-05T21:56:14.598753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgPreVoteResp from 404c942cebf80710 at term 2"}
	{"level":"info","ts":"2024-05-05T21:56:14.598776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became candidate at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.598782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 received MsgVoteResp from 404c942cebf80710 at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.598861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"404c942cebf80710 became leader at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.598872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 404c942cebf80710 elected leader 404c942cebf80710 at term 3"}
	{"level":"info","ts":"2024-05-05T21:56:14.605995Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"404c942cebf80710","local-member-attributes":"{Name:multinode-019621 ClientURLs:[https://192.168.39.30:2379]}","request-path":"/0/members/404c942cebf80710/attributes","cluster-id":"ae8b7a508f3fd394","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-05T21:56:14.606061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:56:14.60647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-05T21:56:14.606492Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-05T21:56:14.606509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T21:56:14.608561Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.30:2379"}
	{"level":"info","ts":"2024-05-05T21:56:14.608662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5cd2dc1892eb7cbb28aef581e7c5692c56ba211f18f9af26dc1fe0d0dd8402dc] <==
	{"level":"info","ts":"2024-05-05T21:51:38.285822Z","caller":"traceutil/trace.go:171","msg":"trace[1306098951] linearizableReadLoop","detail":"{readStateIndex:619; appliedIndex:618; }","duration":"135.989501ms","start":"2024-05-05T21:51:38.149803Z","end":"2024-05-05T21:51:38.285792Z","steps":["trace[1306098951] 'read index received'  (duration: 128.379785ms)","trace[1306098951] 'applied index is now lower than readState.Index'  (duration: 7.608728ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-05T21:51:38.286289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.414159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-05T21:51:38.286438Z","caller":"traceutil/trace.go:171","msg":"trace[1702837299] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:587; }","duration":"136.584359ms","start":"2024-05-05T21:51:38.149778Z","end":"2024-05-05T21:51:38.286363Z","steps":["trace[1702837299] 'agreement among raft nodes before linearized reading'  (duration: 136.126175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:51:38.286512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.670428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-019621-m03\" ","response":"range_response_count:1 size:2095"}
	{"level":"info","ts":"2024-05-05T21:51:38.28657Z","caller":"traceutil/trace.go:171","msg":"trace[145520545] range","detail":"{range_begin:/registry/minions/multinode-019621-m03; range_end:; response_count:1; response_revision:589; }","duration":"134.745395ms","start":"2024-05-05T21:51:38.151816Z","end":"2024-05-05T21:51:38.286561Z","steps":["trace[145520545] 'agreement among raft nodes before linearized reading'  (duration: 134.654946ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:38.286754Z","caller":"traceutil/trace.go:171","msg":"trace[1674215034] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"136.879635ms","start":"2024-05-05T21:51:38.149861Z","end":"2024-05-05T21:51:38.286741Z","steps":["trace[1674215034] 'process raft request'  (duration: 136.413469ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:38.286925Z","caller":"traceutil/trace.go:171","msg":"trace[1174492820] transaction","detail":"{read_only:false; number_of_response:1; response_revision:588; }","duration":"137.035118ms","start":"2024-05-05T21:51:38.149882Z","end":"2024-05-05T21:51:38.286917Z","steps":["trace[1174492820] 'process raft request'  (duration: 136.446283ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:38.287156Z","caller":"traceutil/trace.go:171","msg":"trace[1999143151] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"123.245258ms","start":"2024-05-05T21:51:38.163903Z","end":"2024-05-05T21:51:38.287148Z","steps":["trace[1999143151] 'process raft request'  (duration: 122.44443ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-05T21:51:43.346092Z","caller":"traceutil/trace.go:171","msg":"trace[927855184] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"241.900586ms","start":"2024-05-05T21:51:43.104145Z","end":"2024-05-05T21:51:43.346045Z","steps":["trace[927855184] 'process raft request'  (duration: 179.182139ms)","trace[927855184] 'compare'  (duration: 62.27827ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:51:43.707172Z","caller":"traceutil/trace.go:171","msg":"trace[1755155320] linearizableReadLoop","detail":"{readStateIndex:660; appliedIndex:659; }","duration":"346.183647ms","start":"2024-05-05T21:51:43.360967Z","end":"2024-05-05T21:51:43.707151Z","steps":["trace[1755155320] 'read index received'  (duration: 254.17349ms)","trace[1755155320] 'applied index is now lower than readState.Index'  (duration: 92.009215ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-05T21:51:43.707331Z","caller":"traceutil/trace.go:171","msg":"trace[68441250] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"355.683012ms","start":"2024-05-05T21:51:43.351633Z","end":"2024-05-05T21:51:43.707316Z","steps":["trace[68441250] 'process raft request'  (duration: 263.560753ms)","trace[68441250] 'compare'  (duration: 91.712784ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-05T21:51:43.707591Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:51:43.351617Z","time spent":"355.868846ms","remote":"127.0.0.1:46604","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:600 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"warn","ts":"2024-05-05T21:51:43.707721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"346.747963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-019621-m03\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-05-05T21:51:43.707806Z","caller":"traceutil/trace.go:171","msg":"trace[2015094061] range","detail":"{range_begin:/registry/minions/multinode-019621-m03; range_end:; response_count:1; response_revision:625; }","duration":"346.850346ms","start":"2024-05-05T21:51:43.360945Z","end":"2024-05-05T21:51:43.707795Z","steps":["trace[2015094061] 'agreement among raft nodes before linearized reading'  (duration: 346.410921ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:51:43.707863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:51:43.360932Z","time spent":"346.92034ms","remote":"127.0.0.1:46306","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":2976,"request content":"key:\"/registry/minions/multinode-019621-m03\" "}
	{"level":"info","ts":"2024-05-05T21:54:36.263717Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-05T21:54:36.26392Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-019621","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"]}
	{"level":"warn","ts":"2024-05-05T21:54:36.264046Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.30:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:54:36.264099Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.30:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:54:36.264192Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:54:36.264273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:54:36.318779Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"404c942cebf80710","current-leader-member-id":"404c942cebf80710"}
	{"level":"info","ts":"2024-05-05T21:54:36.321484Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:54:36.321668Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.30:2380"}
	{"level":"info","ts":"2024-05-05T21:54:36.321709Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-019621","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.30:2380"],"advertise-client-urls":["https://192.168.39.30:2379"]}
	
	
	==> kernel <==
	 22:00:09 up 10 min,  0 users,  load average: 0.02, 0.15, 0.12
	Linux multinode-019621 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [43ae3bcd41585aec8645b9074511ba7e2c6162386f12f31e554687b448fe2ad2] <==
	I0505 21:53:53.839129       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:03.852205       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:03.852260       1 main.go:227] handling current node
	I0505 21:54:03.852271       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:03.852277       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:03.852457       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:03.852495       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:13.866962       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:13.867092       1 main.go:227] handling current node
	I0505 21:54:13.867136       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:13.867170       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:13.867314       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:13.867334       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:23.883586       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:23.883757       1 main.go:227] handling current node
	I0505 21:54:23.883793       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:23.883821       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:23.884120       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:23.884211       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	I0505 21:54:33.889531       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:54:33.889613       1 main.go:227] handling current node
	I0505 21:54:33.889635       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:54:33.889652       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:54:33.889757       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0505 21:54:33.889777       1 main.go:250] Node multinode-019621-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d1c57f4a374d74eb1448cb10d635af2105a36b5f1e974454c4460ad7319c5b5b] <==
	I0505 21:59:08.016544       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:59:18.061267       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:59:18.061320       1 main.go:227] handling current node
	I0505 21:59:18.061331       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:59:18.061337       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:59:28.066335       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:59:28.066560       1 main.go:227] handling current node
	I0505 21:59:28.066651       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:59:28.066709       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:59:38.076171       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:59:38.076273       1 main.go:227] handling current node
	I0505 21:59:38.076300       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:59:38.076319       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:59:48.090449       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:59:48.090580       1 main.go:227] handling current node
	I0505 21:59:48.090665       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:59:48.090700       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 21:59:58.103001       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 21:59:58.103151       1 main.go:227] handling current node
	I0505 21:59:58.103163       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 21:59:58.103288       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
	I0505 22:00:08.109658       1 main.go:223] Handling node with IPs: map[192.168.39.30:{}]
	I0505 22:00:08.109719       1 main.go:227] handling current node
	I0505 22:00:08.109751       1 main.go:223] Handling node with IPs: map[192.168.39.242:{}]
	I0505 22:00:08.109758       1 main.go:250] Node multinode-019621-m02 has CIDR [10.244.1.0/24] 
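After the restart, kindnet only ever lists two nodes (the .30 control plane and the .242 worker), each with its pod CIDR. To cross-check the CIDR assignments kindnet is acting on (same context assumption as above):

	kubectl --context multinode-019621 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR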
	
	
	==> kube-apiserver [0156d27216fa4c488bc142152c48c26b8fe0f7dd51cd40d07ab7b86679487f2d] <==
	I0505 21:56:16.019987       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:56:16.020070       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:56:16.021686       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:56:16.021727       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:56:16.021849       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:56:16.029152       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:56:16.030672       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:56:16.030756       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:56:16.030780       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:56:16.030802       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:56:16.030824       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:56:16.034447       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0505 21:56:16.055478       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0505 21:56:16.077593       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:56:16.099643       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:56:16.099718       1 policy_source.go:224] refreshing policies
	I0505 21:56:16.122741       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:56:16.924891       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0505 21:56:17.848103       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0505 21:56:17.969505       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0505 21:56:17.980985       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0505 21:56:18.061789       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0505 21:56:18.072181       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0505 21:56:28.940038       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:56:29.031449       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [f0e5121525f07b44bfc271cc8fb255bbd6cb4d46c6a240e2acbb2ef8f79411e9] <==
	W0505 21:54:36.299255       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299299       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299324       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299353       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299508       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299549       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299580       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299611       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299643       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299668       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299692       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299718       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299746       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299771       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299801       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299832       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299856       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299886       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299931       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299955       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.299977       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.300003       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.300026       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:54:36.300072       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [08a22997b781ac30b4bd928c16f4a5e38f5337b76ca9899db71f29d8edfd2a0c] <==
	I0505 21:56:59.092209       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m02\" does not exist"
	I0505 21:56:59.108770       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m02" podCIDRs=["10.244.1.0/24"]
	I0505 21:57:01.007815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.791µs"
	I0505 21:57:01.021962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.023µs"
	I0505 21:57:01.030630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.172µs"
	I0505 21:57:01.051953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.578µs"
	I0505 21:57:01.060979       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.413µs"
	I0505 21:57:01.065134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.403µs"
	I0505 21:57:07.976483       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:57:08.001847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.835µs"
	I0505 21:57:08.018782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.992µs"
	I0505 21:57:11.534885       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.756575ms"
	I0505 21:57:11.535073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="135.563µs"
	I0505 21:57:30.567542       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:57:31.836710       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m03\" does not exist"
	I0505 21:57:31.836769       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:57:31.862879       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m03" podCIDRs=["10.244.2.0/24"]
	I0505 21:57:41.318601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:57:47.205846       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:58:23.868084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.126072ms"
	I0505 21:58:23.868199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.614µs"
	I0505 21:58:28.740282       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8tzxc"
	I0505 21:58:28.765058       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8tzxc"
	I0505 21:58:28.765449       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-j9cqt"
	I0505 21:58:28.792952       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-j9cqt"
	
	
	==> kube-controller-manager [e409273ba65ef4d25df83acbee19bccd69eb814951a8daf5f0bfee075503ac12] <==
	I0505 21:50:49.648862       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m02\" does not exist"
	I0505 21:50:49.677709       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m02" podCIDRs=["10.244.1.0/24"]
	I0505 21:50:50.989701       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-019621-m02"
	I0505 21:50:59.027834       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:51:01.511030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.120069ms"
	I0505 21:51:01.529626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.356541ms"
	I0505 21:51:01.529959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.614µs"
	I0505 21:51:01.550449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.061µs"
	I0505 21:51:04.989068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.808959ms"
	I0505 21:51:04.989351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.394µs"
	I0505 21:51:05.542172       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.078903ms"
	I0505 21:51:05.542325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.995µs"
	I0505 21:51:38.143921       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m03\" does not exist"
	I0505 21:51:38.144670       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:51:38.295672       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m03" podCIDRs=["10.244.2.0/24"]
	I0505 21:51:41.010806       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-019621-m03"
	I0505 21:51:47.897798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:52:19.384592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:52:20.391564       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:52:20.391738       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-019621-m03\" does not exist"
	I0505 21:52:20.403283       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-019621-m03" podCIDRs=["10.244.3.0/24"]
	I0505 21:52:29.569246       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:53:11.060888       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-019621-m02"
	I0505 21:53:16.164841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.716442ms"
	I0505 21:53:16.164986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.511µs"
	
	
	==> kube-proxy [2014ff87bd1ebe119fdb57654420ef50b567e385a498d454505b7bd8e029c60b] <==
	I0505 21:50:12.572524       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:50:12.581624       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.30"]
	I0505 21:50:12.675852       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:50:12.675941       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:50:12.675957       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:50:12.690521       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:50:12.690746       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:50:12.690759       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:50:12.692250       1 config.go:192] "Starting service config controller"
	I0505 21:50:12.692264       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:50:12.692333       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:50:12.692337       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:50:12.699810       1 config.go:319] "Starting node config controller"
	I0505 21:50:12.700654       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:50:12.793082       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:50:12.793115       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:50:12.801124       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [88a7ed5f5366d4904c962d6a5989c8bafc8e05b1e16b34cde7a98a2134a8a5f6] <==
	I0505 21:56:17.112971       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:56:17.128208       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.30"]
	I0505 21:56:17.202836       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:56:17.202901       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:56:17.202919       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:56:17.205780       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:56:17.205958       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:56:17.206000       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:56:17.207641       1 config.go:192] "Starting service config controller"
	I0505 21:56:17.207683       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:56:17.207723       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:56:17.207727       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:56:17.208033       1 config.go:319] "Starting node config controller"
	I0505 21:56:17.208077       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:56:17.308685       1 shared_informer.go:320] Caches are synced for node config
	I0505 21:56:17.308739       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:56:17.308769       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [03073d2772bd2a7f14478d4361a75d75e3f0fbcfe076e864d01d053b846bea19] <==
	I0505 21:56:14.071462       1 serving.go:380] Generated self-signed cert in-memory
	W0505 21:56:16.003337       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0505 21:56:16.003537       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:56:16.003654       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 21:56:16.003688       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 21:56:16.039628       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0505 21:56:16.039694       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:56:16.043195       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0505 21:56:16.043962       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 21:56:16.044159       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:56:16.043988       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:56:16.144922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b1b5f166a5cf3df5211e9fab19839dd1f65f15d8402aa2dfe1f66c30164d4eed] <==
	W0505 21:49:55.810803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:55.810812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:55.810937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:55.810982       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:55.811027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:55.811037       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:55.811113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0505 21:49:55.811150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0505 21:49:55.811213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0505 21:49:55.811251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0505 21:49:55.811297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0505 21:49:55.811308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0505 21:49:56.719769       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:56.719800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:56.747106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0505 21:49:56.747176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0505 21:49:57.007806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0505 21:49:57.007864       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0505 21:49:57.008754       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0505 21:49:57.008811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 21:49:57.051663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0505 21:49:57.051749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0505 21:49:58.893888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:54:36.257148       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0505 21:54:36.257902       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.192958    3067 kubelet_node_status.go:76] "Successfully registered node" node="multinode-019621"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.195550    3067 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.197878    3067 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216327    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e2119b45-a792-4860-8906-6d4b422fa032-cni-cfg\") pod \"kindnet-kbqkb\" (UID: \"e2119b45-a792-4860-8906-6d4b422fa032\") " pod="kube-system/kindnet-kbqkb"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216360    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2119b45-a792-4860-8906-6d4b422fa032-xtables-lock\") pod \"kindnet-kbqkb\" (UID: \"e2119b45-a792-4860-8906-6d4b422fa032\") " pod="kube-system/kindnet-kbqkb"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216483    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe-tmp\") pod \"storage-provisioner\" (UID: \"ec4a14d9-48ae-4f7b-9f1c-2d17443a9abe\") " pod="kube-system/storage-provisioner"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216512    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2119b45-a792-4860-8906-6d4b422fa032-lib-modules\") pod \"kindnet-kbqkb\" (UID: \"e2119b45-a792-4860-8906-6d4b422fa032\") " pod="kube-system/kindnet-kbqkb"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216562    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cd95fb4-395f-40b0-ac69-985877734928-xtables-lock\") pod \"kube-proxy-cpdww\" (UID: \"6cd95fb4-395f-40b0-ac69-985877734928\") " pod="kube-system/kube-proxy-cpdww"
	May 05 21:56:16 multinode-019621 kubelet[3067]: I0505 21:56:16.216579    3067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cd95fb4-395f-40b0-ac69-985877734928-lib-modules\") pod \"kube-proxy-cpdww\" (UID: \"6cd95fb4-395f-40b0-ac69-985877734928\") " pod="kube-system/kube-proxy-cpdww"
	May 05 21:56:23 multinode-019621 kubelet[3067]: I0505 21:56:23.703582    3067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 05 21:57:12 multinode-019621 kubelet[3067]: E0505 21:57:12.291026    3067 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:57:12 multinode-019621 kubelet[3067]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:57:12 multinode-019621 kubelet[3067]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:57:12 multinode-019621 kubelet[3067]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:57:12 multinode-019621 kubelet[3067]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:58:12 multinode-019621 kubelet[3067]: E0505 21:58:12.298810    3067 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:58:12 multinode-019621 kubelet[3067]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:58:12 multinode-019621 kubelet[3067]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:58:12 multinode-019621 kubelet[3067]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:58:12 multinode-019621 kubelet[3067]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:59:12 multinode-019621 kubelet[3067]: E0505 21:59:12.291591    3067 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:59:12 multinode-019621 kubelet[3067]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:59:12 multinode-019621 kubelet[3067]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:59:12 multinode-019621 kubelet[3067]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:59:12 multinode-019621 kubelet[3067]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 22:00:08.144928   50666 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18602-11466/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-019621 -n multinode-019621
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-019621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.59s)

                                                
                                    
x
+
TestPreload (265.81s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-006416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0505 22:04:31.830203   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 22:06:51.947808   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-006416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m57.512802223s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-006416 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-006416 image pull gcr.io/k8s-minikube/busybox: (3.008588324s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-006416
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-006416: (7.307920306s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-006416 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-006416 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.792142341s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-006416 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-05-05 22:08:44.464192129 +0000 UTC m=+4289.836936257
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-006416 -n test-preload-006416
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-006416 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-006416 logs -n 25: (1.164996037s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621 sudo cat                                       | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m03_multinode-019621.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt                       | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m02:/home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n                                                                 | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | multinode-019621-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-019621 ssh -n multinode-019621-m02 sudo cat                                   | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:51 UTC |
	|         | /home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-019621 node stop m03                                                          | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:51 UTC | 05 May 24 21:52 UTC |
	| node    | multinode-019621 node start                                                             | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:52 UTC | 05 May 24 21:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-019621                                                                | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:52 UTC |                     |
	| stop    | -p multinode-019621                                                                     | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:52 UTC |                     |
	| start   | -p multinode-019621                                                                     | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:54 UTC | 05 May 24 21:57 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-019621                                                                | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:57 UTC |                     |
	| node    | multinode-019621 node delete                                                            | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:57 UTC | 05 May 24 21:57 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-019621 stop                                                                   | multinode-019621     | jenkins | v1.33.0 | 05 May 24 21:57 UTC |                     |
	| start   | -p multinode-019621                                                                     | multinode-019621     | jenkins | v1.33.0 | 05 May 24 22:00 UTC | 05 May 24 22:03 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-019621                                                                | multinode-019621     | jenkins | v1.33.0 | 05 May 24 22:03 UTC |                     |
	| start   | -p multinode-019621-m02                                                                 | multinode-019621-m02 | jenkins | v1.33.0 | 05 May 24 22:03 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-019621-m03                                                                 | multinode-019621-m03 | jenkins | v1.33.0 | 05 May 24 22:03 UTC | 05 May 24 22:04 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-019621                                                                 | multinode-019621     | jenkins | v1.33.0 | 05 May 24 22:04 UTC |                     |
	| delete  | -p multinode-019621-m03                                                                 | multinode-019621-m03 | jenkins | v1.33.0 | 05 May 24 22:04 UTC | 05 May 24 22:04 UTC |
	| delete  | -p multinode-019621                                                                     | multinode-019621     | jenkins | v1.33.0 | 05 May 24 22:04 UTC | 05 May 24 22:04 UTC |
	| start   | -p test-preload-006416                                                                  | test-preload-006416  | jenkins | v1.33.0 | 05 May 24 22:04 UTC | 05 May 24 22:07 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-006416 image pull                                                          | test-preload-006416  | jenkins | v1.33.0 | 05 May 24 22:07 UTC | 05 May 24 22:07 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-006416                                                                  | test-preload-006416  | jenkins | v1.33.0 | 05 May 24 22:07 UTC | 05 May 24 22:07 UTC |
	| start   | -p test-preload-006416                                                                  | test-preload-006416  | jenkins | v1.33.0 | 05 May 24 22:07 UTC | 05 May 24 22:08 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-006416 image list                                                          | test-preload-006416  | jenkins | v1.33.0 | 05 May 24 22:08 UTC | 05 May 24 22:08 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 22:07:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 22:07:29.491930   53407 out.go:291] Setting OutFile to fd 1 ...
	I0505 22:07:29.492037   53407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:07:29.492045   53407 out.go:304] Setting ErrFile to fd 2...
	I0505 22:07:29.492049   53407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:07:29.492258   53407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 22:07:29.492805   53407 out.go:298] Setting JSON to false
	I0505 22:07:29.493687   53407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6596,"bootTime":1714940253,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 22:07:29.493751   53407 start.go:139] virtualization: kvm guest
	I0505 22:07:29.496094   53407 out.go:177] * [test-preload-006416] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 22:07:29.497610   53407 notify.go:220] Checking for updates...
	I0505 22:07:29.497627   53407 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 22:07:29.499089   53407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 22:07:29.500396   53407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:07:29.501932   53407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 22:07:29.503436   53407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 22:07:29.504891   53407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 22:07:29.506671   53407 config.go:182] Loaded profile config "test-preload-006416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0505 22:07:29.507031   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:07:29.507086   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:07:29.521638   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I0505 22:07:29.522003   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:07:29.522534   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:07:29.522561   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:07:29.522858   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:07:29.523048   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:07:29.524979   53407 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0505 22:07:29.526310   53407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 22:07:29.526596   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:07:29.526629   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:07:29.541155   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0505 22:07:29.541594   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:07:29.542032   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:07:29.542051   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:07:29.542360   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:07:29.542560   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:07:29.578865   53407 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 22:07:29.580109   53407 start.go:297] selected driver: kvm2
	I0505 22:07:29.580131   53407 start.go:901] validating driver "kvm2" against &{Name:test-preload-006416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-006416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:07:29.580262   53407 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 22:07:29.580973   53407 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:07:29.581097   53407 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 22:07:29.596435   53407 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 22:07:29.596804   53407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 22:07:29.596876   53407 cni.go:84] Creating CNI manager for ""
	I0505 22:07:29.596891   53407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:07:29.596953   53407 start.go:340] cluster config:
	{Name:test-preload-006416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-006416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:07:29.597073   53407 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:07:29.598821   53407 out.go:177] * Starting "test-preload-006416" primary control-plane node in "test-preload-006416" cluster
	I0505 22:07:29.600015   53407 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0505 22:07:30.061491   53407 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0505 22:07:30.061531   53407 cache.go:56] Caching tarball of preloaded images
	I0505 22:07:30.061674   53407 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0505 22:07:30.063724   53407 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0505 22:07:30.065008   53407 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0505 22:07:30.174982   53407 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0505 22:07:42.325185   53407 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0505 22:07:42.325279   53407 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0505 22:07:43.165030   53407 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0505 22:07:43.165147   53407 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/config.json ...
	I0505 22:07:43.165370   53407 start.go:360] acquireMachinesLock for test-preload-006416: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 22:07:43.165431   53407 start.go:364] duration metric: took 41.027µs to acquireMachinesLock for "test-preload-006416"
	I0505 22:07:43.165450   53407 start.go:96] Skipping create...Using existing machine configuration
	I0505 22:07:43.165461   53407 fix.go:54] fixHost starting: 
	I0505 22:07:43.165760   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:07:43.165799   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:07:43.179962   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35139
	I0505 22:07:43.180370   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:07:43.180843   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:07:43.180861   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:07:43.181158   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:07:43.181374   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:07:43.181553   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetState
	I0505 22:07:43.183062   53407 fix.go:112] recreateIfNeeded on test-preload-006416: state=Stopped err=<nil>
	I0505 22:07:43.183083   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	W0505 22:07:43.183228   53407 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 22:07:43.185494   53407 out.go:177] * Restarting existing kvm2 VM for "test-preload-006416" ...
	I0505 22:07:43.187009   53407 main.go:141] libmachine: (test-preload-006416) Calling .Start
	I0505 22:07:43.187200   53407 main.go:141] libmachine: (test-preload-006416) Ensuring networks are active...
	I0505 22:07:43.187971   53407 main.go:141] libmachine: (test-preload-006416) Ensuring network default is active
	I0505 22:07:43.188274   53407 main.go:141] libmachine: (test-preload-006416) Ensuring network mk-test-preload-006416 is active
	I0505 22:07:43.188614   53407 main.go:141] libmachine: (test-preload-006416) Getting domain xml...
	I0505 22:07:43.189414   53407 main.go:141] libmachine: (test-preload-006416) Creating domain...
	I0505 22:07:44.378572   53407 main.go:141] libmachine: (test-preload-006416) Waiting to get IP...
	I0505 22:07:44.379568   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:44.379984   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:44.380058   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:44.379966   53474 retry.go:31] will retry after 205.580614ms: waiting for machine to come up
	I0505 22:07:44.587641   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:44.588227   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:44.588249   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:44.588184   53474 retry.go:31] will retry after 337.341508ms: waiting for machine to come up
	I0505 22:07:44.926659   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:44.927059   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:44.927100   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:44.927043   53474 retry.go:31] will retry after 326.889092ms: waiting for machine to come up
	I0505 22:07:45.255821   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:45.256194   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:45.256223   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:45.256143   53474 retry.go:31] will retry after 514.439685ms: waiting for machine to come up
	I0505 22:07:45.771900   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:45.772302   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:45.772329   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:45.772260   53474 retry.go:31] will retry after 744.031867ms: waiting for machine to come up
	I0505 22:07:46.517632   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:46.517973   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:46.517999   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:46.517929   53474 retry.go:31] will retry after 919.219518ms: waiting for machine to come up
	I0505 22:07:47.439004   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:47.439371   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:47.439412   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:47.439328   53474 retry.go:31] will retry after 902.200386ms: waiting for machine to come up
	I0505 22:07:48.342833   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:48.343156   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:48.343172   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:48.343138   53474 retry.go:31] will retry after 995.0307ms: waiting for machine to come up
	I0505 22:07:49.340336   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:49.340775   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:49.340802   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:49.340722   53474 retry.go:31] will retry after 1.392360114s: waiting for machine to come up
	I0505 22:07:50.735282   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:50.735877   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:50.735900   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:50.735837   53474 retry.go:31] will retry after 1.703962372s: waiting for machine to come up
	I0505 22:07:52.441002   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:52.441445   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:52.441478   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:52.441395   53474 retry.go:31] will retry after 2.338376289s: waiting for machine to come up
	I0505 22:07:54.782311   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:54.782720   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:54.782753   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:54.782673   53474 retry.go:31] will retry after 2.99541088s: waiting for machine to come up
	I0505 22:07:57.781816   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:07:57.782169   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:07:57.782196   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:07:57.782145   53474 retry.go:31] will retry after 2.746494104s: waiting for machine to come up
	I0505 22:08:00.531604   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:00.532045   53407 main.go:141] libmachine: (test-preload-006416) DBG | unable to find current IP address of domain test-preload-006416 in network mk-test-preload-006416
	I0505 22:08:00.532064   53407 main.go:141] libmachine: (test-preload-006416) DBG | I0505 22:08:00.532022   53474 retry.go:31] will retry after 3.967119489s: waiting for machine to come up
	I0505 22:08:04.501042   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.501434   53407 main.go:141] libmachine: (test-preload-006416) Found IP for machine: 192.168.39.118
	I0505 22:08:04.501456   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has current primary IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.501465   53407 main.go:141] libmachine: (test-preload-006416) Reserving static IP address...
	I0505 22:08:04.501939   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "test-preload-006416", mac: "52:54:00:e2:0a:4a", ip: "192.168.39.118"} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.501970   53407 main.go:141] libmachine: (test-preload-006416) DBG | skip adding static IP to network mk-test-preload-006416 - found existing host DHCP lease matching {name: "test-preload-006416", mac: "52:54:00:e2:0a:4a", ip: "192.168.39.118"}
	I0505 22:08:04.501986   53407 main.go:141] libmachine: (test-preload-006416) Reserved static IP address: 192.168.39.118
	I0505 22:08:04.502008   53407 main.go:141] libmachine: (test-preload-006416) Waiting for SSH to be available...
	I0505 22:08:04.502043   53407 main.go:141] libmachine: (test-preload-006416) DBG | Getting to WaitForSSH function...
	I0505 22:08:04.504140   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.504443   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.504475   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.504583   53407 main.go:141] libmachine: (test-preload-006416) DBG | Using SSH client type: external
	I0505 22:08:04.504622   53407 main.go:141] libmachine: (test-preload-006416) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa (-rw-------)
	I0505 22:08:04.504680   53407 main.go:141] libmachine: (test-preload-006416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 22:08:04.504702   53407 main.go:141] libmachine: (test-preload-006416) DBG | About to run SSH command:
	I0505 22:08:04.504721   53407 main.go:141] libmachine: (test-preload-006416) DBG | exit 0
	I0505 22:08:04.627581   53407 main.go:141] libmachine: (test-preload-006416) DBG | SSH cmd err, output: <nil>: 
	I0505 22:08:04.627921   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetConfigRaw
	I0505 22:08:04.628579   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetIP
	I0505 22:08:04.631052   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.631401   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.631434   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.631645   53407 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/config.json ...
	I0505 22:08:04.631832   53407 machine.go:94] provisionDockerMachine start ...
	I0505 22:08:04.631852   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:04.632084   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:04.634020   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.634299   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.634325   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.634435   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:04.634588   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:04.634758   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:04.634938   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:04.635142   53407 main.go:141] libmachine: Using SSH client type: native
	I0505 22:08:04.635345   53407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0505 22:08:04.635359   53407 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 22:08:04.736499   53407 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 22:08:04.736531   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetMachineName
	I0505 22:08:04.736778   53407 buildroot.go:166] provisioning hostname "test-preload-006416"
	I0505 22:08:04.736816   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetMachineName
	I0505 22:08:04.736993   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:04.739537   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.739898   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.739923   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.740090   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:04.740280   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:04.740464   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:04.740601   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:04.740743   53407 main.go:141] libmachine: Using SSH client type: native
	I0505 22:08:04.740944   53407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0505 22:08:04.740962   53407 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-006416 && echo "test-preload-006416" | sudo tee /etc/hostname
	I0505 22:08:04.857830   53407 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-006416
	
	I0505 22:08:04.857863   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:04.860585   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.860956   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.860981   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.861182   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:04.861366   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:04.861521   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:04.861645   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:04.861845   53407 main.go:141] libmachine: Using SSH client type: native
	I0505 22:08:04.861998   53407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0505 22:08:04.862015   53407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-006416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-006416/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-006416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 22:08:04.974223   53407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:08:04.974259   53407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 22:08:04.974284   53407 buildroot.go:174] setting up certificates
	I0505 22:08:04.974297   53407 provision.go:84] configureAuth start
	I0505 22:08:04.974307   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetMachineName
	I0505 22:08:04.974582   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetIP
	I0505 22:08:04.977051   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.977381   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.977422   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.977570   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:04.979772   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.980095   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:04.980120   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:04.980275   53407 provision.go:143] copyHostCerts
	I0505 22:08:04.980327   53407 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 22:08:04.980343   53407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 22:08:04.980405   53407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 22:08:04.980486   53407 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 22:08:04.980494   53407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 22:08:04.980518   53407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 22:08:04.980568   53407 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 22:08:04.980575   53407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 22:08:04.980594   53407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 22:08:04.980679   53407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.test-preload-006416 san=[127.0.0.1 192.168.39.118 localhost minikube test-preload-006416]
	I0505 22:08:05.166737   53407 provision.go:177] copyRemoteCerts
	I0505 22:08:05.166797   53407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 22:08:05.166818   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:05.169260   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.169549   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:05.169583   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.169754   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:05.169962   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.170138   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:05.170288   53407 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa Username:docker}
	I0505 22:08:05.251772   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 22:08:05.278965   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0505 22:08:05.307077   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 22:08:05.334880   53407 provision.go:87] duration metric: took 360.569774ms to configureAuth
	I0505 22:08:05.334915   53407 buildroot.go:189] setting minikube options for container-runtime
	I0505 22:08:05.335161   53407 config.go:182] Loaded profile config "test-preload-006416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0505 22:08:05.335234   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:05.338097   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.338491   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:05.338521   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.338666   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:05.338852   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.339018   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.339131   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:05.339270   53407 main.go:141] libmachine: Using SSH client type: native
	I0505 22:08:05.339449   53407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0505 22:08:05.339473   53407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 22:08:05.635157   53407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 22:08:05.635189   53407 machine.go:97] duration metric: took 1.003340427s to provisionDockerMachine
	I0505 22:08:05.635202   53407 start.go:293] postStartSetup for "test-preload-006416" (driver="kvm2")
	I0505 22:08:05.635212   53407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 22:08:05.635236   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:05.635553   53407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 22:08:05.635604   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:05.638200   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.638507   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:05.638534   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.638719   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:05.638893   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.639066   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:05.639210   53407 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa Username:docker}
	I0505 22:08:05.719591   53407 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 22:08:05.724526   53407 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 22:08:05.724548   53407 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 22:08:05.724604   53407 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 22:08:05.724686   53407 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 22:08:05.724773   53407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 22:08:05.735036   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:08:05.762536   53407 start.go:296] duration metric: took 127.321776ms for postStartSetup
	I0505 22:08:05.762581   53407 fix.go:56] duration metric: took 22.59711974s for fixHost
	I0505 22:08:05.762604   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:05.765439   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.765755   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:05.765780   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.765995   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:05.766177   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.766338   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.766476   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:05.766682   53407 main.go:141] libmachine: Using SSH client type: native
	I0505 22:08:05.766890   53407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0505 22:08:05.766909   53407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 22:08:05.864532   53407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714946885.827423489
	
	I0505 22:08:05.864562   53407 fix.go:216] guest clock: 1714946885.827423489
	I0505 22:08:05.864569   53407 fix.go:229] Guest: 2024-05-05 22:08:05.827423489 +0000 UTC Remote: 2024-05-05 22:08:05.762586013 +0000 UTC m=+36.317930836 (delta=64.837476ms)
	I0505 22:08:05.864587   53407 fix.go:200] guest clock delta is within tolerance: 64.837476ms
	I0505 22:08:05.864592   53407 start.go:83] releasing machines lock for "test-preload-006416", held for 22.699149613s
	I0505 22:08:05.864608   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:05.864871   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetIP
	I0505 22:08:05.867836   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.868195   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:05.868229   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.868328   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:05.868767   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:05.868946   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:05.869015   53407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 22:08:05.869056   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:05.869196   53407 ssh_runner.go:195] Run: cat /version.json
	I0505 22:08:05.869222   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:05.871696   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.871882   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.872185   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:05.872214   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.872247   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:05.872270   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:05.872277   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:05.872447   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.872478   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:05.872635   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:05.872646   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:05.872812   53407 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa Username:docker}
	I0505 22:08:05.872825   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:05.872980   53407 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa Username:docker}
	I0505 22:08:05.948419   53407 ssh_runner.go:195] Run: systemctl --version
	I0505 22:08:05.968279   53407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 22:08:06.117477   53407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 22:08:06.124639   53407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 22:08:06.124711   53407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 22:08:06.142764   53407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 22:08:06.142789   53407 start.go:494] detecting cgroup driver to use...
	I0505 22:08:06.142847   53407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 22:08:06.163662   53407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 22:08:06.180426   53407 docker.go:217] disabling cri-docker service (if available) ...
	I0505 22:08:06.180484   53407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 22:08:06.196608   53407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 22:08:06.212683   53407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 22:08:06.339353   53407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 22:08:06.504030   53407 docker.go:233] disabling docker service ...
	I0505 22:08:06.504139   53407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 22:08:06.520722   53407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 22:08:06.535794   53407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 22:08:06.707666   53407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 22:08:06.838254   53407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 22:08:06.854233   53407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 22:08:06.875366   53407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0505 22:08:06.875423   53407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:08:06.886732   53407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 22:08:06.886791   53407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:08:06.898102   53407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:08:06.909318   53407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:08:06.920723   53407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 22:08:06.932352   53407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:08:06.943525   53407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:08:06.963206   53407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:08:06.974604   53407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 22:08:06.984781   53407 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 22:08:06.984847   53407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 22:08:06.999112   53407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 22:08:07.009516   53407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:08:07.124416   53407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 22:08:07.268136   53407 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 22:08:07.268221   53407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 22:08:07.274086   53407 start.go:562] Will wait 60s for crictl version
	I0505 22:08:07.274139   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:07.278275   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 22:08:07.318692   53407 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 22:08:07.318778   53407 ssh_runner.go:195] Run: crio --version
	I0505 22:08:07.348770   53407 ssh_runner.go:195] Run: crio --version
	I0505 22:08:07.384157   53407 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0505 22:08:07.385837   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetIP
	I0505 22:08:07.388498   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:07.388913   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:07.388945   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:07.389139   53407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 22:08:07.393829   53407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:08:07.407877   53407 kubeadm.go:877] updating cluster {Name:test-preload-006416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-006416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 22:08:07.407987   53407 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0505 22:08:07.408028   53407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:08:07.457961   53407 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0505 22:08:07.458037   53407 ssh_runner.go:195] Run: which lz4
	I0505 22:08:07.463065   53407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0505 22:08:07.468052   53407 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 22:08:07.468083   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0505 22:08:09.385349   53407 crio.go:462] duration metric: took 1.922311686s to copy over tarball
	I0505 22:08:09.385421   53407 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 22:08:12.119303   53407 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.733850888s)
	I0505 22:08:12.119335   53407 crio.go:469] duration metric: took 2.733951842s to extract the tarball
	I0505 22:08:12.119342   53407 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 22:08:12.163170   53407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:08:12.222055   53407 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0505 22:08:12.222078   53407 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0505 22:08:12.222153   53407 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:08:12.222182   53407 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0505 22:08:12.222196   53407 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0505 22:08:12.222240   53407 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0505 22:08:12.222273   53407 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0505 22:08:12.222288   53407 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0505 22:08:12.222304   53407 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 22:08:12.222157   53407 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0505 22:08:12.223541   53407 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0505 22:08:12.223595   53407 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 22:08:12.223603   53407 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0505 22:08:12.223610   53407 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0505 22:08:12.223611   53407 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0505 22:08:12.223622   53407 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:08:12.223631   53407 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0505 22:08:12.223543   53407 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0505 22:08:12.418395   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0505 22:08:12.432008   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0505 22:08:12.477800   53407 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0505 22:08:12.477854   53407 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0505 22:08:12.477902   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:12.505557   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0505 22:08:12.505619   53407 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0505 22:08:12.505652   53407 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0505 22:08:12.505692   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:12.546750   53407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0505 22:08:12.546764   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0505 22:08:12.546849   53407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0505 22:08:12.552815   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0505 22:08:12.554009   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0505 22:08:12.555657   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0505 22:08:12.558701   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0505 22:08:12.580943   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0505 22:08:12.661413   53407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0505 22:08:12.661440   53407 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0505 22:08:12.661494   53407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0505 22:08:12.661502   53407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0505 22:08:12.661588   53407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0505 22:08:12.668313   53407 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0505 22:08:12.668343   53407 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0505 22:08:12.668376   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:12.734299   53407 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0505 22:08:12.734336   53407 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0505 22:08:12.734353   53407 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0505 22:08:12.734351   53407 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0505 22:08:12.734334   53407 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0505 22:08:12.734436   53407 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0505 22:08:12.734472   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:12.734397   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:12.734397   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:12.739285   53407 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0505 22:08:12.739318   53407 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0505 22:08:12.739351   53407 ssh_runner.go:195] Run: which crictl
	I0505 22:08:13.155806   53407 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:08:14.780861   53407 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.119250907s)
	I0505 22:08:14.780898   53407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0505 22:08:14.780944   53407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.119432306s)
	I0505 22:08:14.780955   53407 ssh_runner.go:235] Completed: which crictl: (2.11256537s)
	I0505 22:08:14.780999   53407 ssh_runner.go:235] Completed: which crictl: (2.046518198s)
	I0505 22:08:14.780960   53407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0505 22:08:14.781032   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0505 22:08:14.781053   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0505 22:08:14.781061   53407 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0505 22:08:14.781096   53407 ssh_runner.go:235] Completed: which crictl: (2.046536969s)
	I0505 22:08:14.781135   53407 ssh_runner.go:235] Completed: which crictl: (2.046628525s)
	I0505 22:08:14.781151   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0505 22:08:14.781168   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0505 22:08:14.781101   53407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0505 22:08:14.781184   53407 ssh_runner.go:235] Completed: which crictl: (2.04180816s)
	I0505 22:08:14.781218   53407 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0505 22:08:14.781243   53407 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.625405516s)
	I0505 22:08:14.911389   53407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0505 22:08:14.911531   53407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0505 22:08:14.911550   53407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0505 22:08:14.911637   53407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0505 22:08:15.039266   53407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0505 22:08:15.039376   53407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0505 22:08:15.039436   53407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0505 22:08:15.039461   53407 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0505 22:08:15.039491   53407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0505 22:08:15.039530   53407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0505 22:08:15.039549   53407 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0505 22:08:15.039559   53407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0505 22:08:15.039576   53407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0505 22:08:15.039580   53407 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0505 22:08:15.039617   53407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0505 22:08:15.050362   53407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0505 22:08:15.050413   53407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0505 22:08:15.050479   53407 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0505 22:08:15.799237   53407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0505 22:08:15.799285   53407 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0505 22:08:15.799332   53407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0505 22:08:16.548501   53407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0505 22:08:16.548545   53407 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0505 22:08:16.548595   53407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0505 22:08:17.007251   53407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0505 22:08:17.007304   53407 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0505 22:08:17.007359   53407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0505 22:08:17.470444   53407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0505 22:08:17.470490   53407 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0505 22:08:17.470573   53407 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0505 22:08:19.730832   53407 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.260230974s)
	I0505 22:08:19.730874   53407 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0505 22:08:19.730906   53407 cache_images.go:123] Successfully loaded all cached images
	I0505 22:08:19.730912   53407 cache_images.go:92] duration metric: took 7.508823279s to LoadCachedImages
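
The cache_images flow above reduces to: inspect each required image in the container runtime, and for any image that is missing, drop the stale reference and podman-load the cached tarball from /var/lib/minikube/images. A rough Go sketch of that decision loop follows, with hypothetical helper names (imageExists, loadFromCache) rather than minikube's real cache_images functions; the crictl rmi step from the log is omitted for brevity.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // imageExists asks the runtime (via podman, as in the log) whether an image is present.
    func imageExists(image string) bool {
    	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
    }

    // loadFromCache loads a cached image tarball into the runtime with podman load.
    func loadFromCache(tarball string) error {
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	images := []string{
    		"registry.k8s.io/kube-apiserver:v1.24.4",
    		"registry.k8s.io/kube-proxy:v1.24.4",
    		"registry.k8s.io/pause:3.7",
    	}
    	cacheDir := "/var/lib/minikube/images" // where the log copies cached tarballs
    	for _, img := range images {
    		if imageExists(img) {
    			continue
    		}
    		// e.g. registry.k8s.io/pause:3.7 -> pause_3.7, matching the filenames in the log
    		name := strings.ReplaceAll(filepath.Base(img), ":", "_")
    		if err := loadFromCache(filepath.Join(cacheDir, name)); err != nil {
    			fmt.Println("load failed:", err)
    		}
    	}
    }
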
	I0505 22:08:19.730925   53407 kubeadm.go:928] updating node { 192.168.39.118 8443 v1.24.4 crio true true} ...
	I0505 22:08:19.731028   53407 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-006416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-006416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 22:08:19.731106   53407 ssh_runner.go:195] Run: crio config
	I0505 22:08:19.787111   53407 cni.go:84] Creating CNI manager for ""
	I0505 22:08:19.787139   53407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:08:19.787153   53407 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 22:08:19.787171   53407 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-006416 NodeName:test-preload-006416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 22:08:19.787305   53407 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-006416"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 22:08:19.787368   53407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0505 22:08:19.799247   53407 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 22:08:19.799321   53407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 22:08:19.810605   53407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0505 22:08:19.829146   53407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 22:08:19.848103   53407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
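
The rendered config shown above is what lands in /var/tmp/minikube/kubeadm.yaml (the 2106-byte scp just above). As a quick sanity check, the multi-document YAML can be split on its "---" separators and parsed; here is a small sketch using the sigs.k8s.io/yaml package, with the printed fields chosen purely for illustration.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"sigs.k8s.io/yaml"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// The file holds several YAML documents (Init, Cluster, Kubelet, KubeProxy configs).
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		var m map[string]interface{}
    		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
    			fmt.Fprintln(os.Stderr, "bad document:", err)
    			os.Exit(1)
    		}
    		// Print the kind plus a couple of fields; absent fields print as <nil>.
    		fmt.Println(m["kind"], m["kubernetesVersion"], m["clusterName"])
    	}
    }
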
	I0505 22:08:19.867947   53407 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0505 22:08:19.872399   53407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:08:19.887034   53407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:08:20.030809   53407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:08:20.050783   53407 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416 for IP: 192.168.39.118
	I0505 22:08:20.050811   53407 certs.go:194] generating shared ca certs ...
	I0505 22:08:20.050832   53407 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:08:20.051005   53407 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 22:08:20.051059   53407 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 22:08:20.051072   53407 certs.go:256] generating profile certs ...
	I0505 22:08:20.051189   53407 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/client.key
	I0505 22:08:20.051276   53407 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/apiserver.key.88065ea8
	I0505 22:08:20.051328   53407 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/proxy-client.key
	I0505 22:08:20.051474   53407 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 22:08:20.051529   53407 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 22:08:20.051542   53407 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 22:08:20.051574   53407 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 22:08:20.051602   53407 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 22:08:20.051632   53407 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 22:08:20.051689   53407 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:08:20.052401   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 22:08:20.088202   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 22:08:20.129413   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 22:08:20.165338   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 22:08:20.198692   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0505 22:08:20.244940   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 22:08:20.275255   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 22:08:20.301677   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0505 22:08:20.328459   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 22:08:20.355010   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 22:08:20.381398   53407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 22:08:20.407589   53407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 22:08:20.427164   53407 ssh_runner.go:195] Run: openssl version
	I0505 22:08:20.434323   53407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 22:08:20.447916   53407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:08:20.453736   53407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:08:20.453807   53407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:08:20.460396   53407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 22:08:20.473246   53407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 22:08:20.486146   53407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 22:08:20.491651   53407 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 22:08:20.491716   53407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 22:08:20.498155   53407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 22:08:20.510860   53407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 22:08:20.523617   53407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 22:08:20.528689   53407 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 22:08:20.528735   53407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 22:08:20.535068   53407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 22:08:20.547788   53407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 22:08:20.552964   53407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 22:08:20.559551   53407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 22:08:20.567734   53407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 22:08:20.574036   53407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 22:08:20.580561   53407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 22:08:20.586889   53407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 22:08:20.593213   53407 kubeadm.go:391] StartCluster: {Name:test-preload-006416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-006416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:08:20.593283   53407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 22:08:20.593318   53407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:08:20.633548   53407 cri.go:89] found id: ""
	I0505 22:08:20.633617   53407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 22:08:20.645624   53407 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 22:08:20.645644   53407 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 22:08:20.645648   53407 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 22:08:20.645722   53407 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 22:08:20.657242   53407 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 22:08:20.657646   53407 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-006416" does not appear in /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:08:20.657791   53407 kubeconfig.go:62] /home/jenkins/minikube-integration/18602-11466/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-006416" cluster setting kubeconfig missing "test-preload-006416" context setting]
	I0505 22:08:20.658096   53407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:08:20.658727   53407 kapi.go:59] client config for test-preload-006416: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 22:08:20.659282   53407 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 22:08:20.669874   53407 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.118
	I0505 22:08:20.669900   53407 kubeadm.go:1154] stopping kube-system containers ...
	I0505 22:08:20.669920   53407 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0505 22:08:20.669962   53407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:08:20.708888   53407 cri.go:89] found id: ""
	I0505 22:08:20.708964   53407 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0505 22:08:20.728794   53407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:08:20.740090   53407 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:08:20.740115   53407 kubeadm.go:156] found existing configuration files:
	
	I0505 22:08:20.740165   53407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:08:20.751064   53407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:08:20.751125   53407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:08:20.762317   53407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:08:20.773004   53407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:08:20.773063   53407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:08:20.783781   53407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:08:20.794345   53407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:08:20.794415   53407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:08:20.805570   53407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:08:20.816447   53407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:08:20.816507   53407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:08:20.827460   53407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 22:08:20.838732   53407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:08:20.937105   53407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:08:21.824497   53407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:08:22.119919   53407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:08:22.221464   53407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:08:22.361504   53407 api_server.go:52] waiting for apiserver process to appear ...
	I0505 22:08:22.361582   53407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:08:22.862510   53407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:08:23.362444   53407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:08:23.439399   53407 api_server.go:72] duration metric: took 1.077897127s to wait for apiserver process to appear ...
	I0505 22:08:23.439425   53407 api_server.go:88] waiting for apiserver healthz status ...
	I0505 22:08:23.439443   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:23.439954   53407 api_server.go:269] stopped: https://192.168.39.118:8443/healthz: Get "https://192.168.39.118:8443/healthz": dial tcp 192.168.39.118:8443: connect: connection refused
	I0505 22:08:23.939743   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:27.282350   53407 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0505 22:08:27.282382   53407 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0505 22:08:27.282396   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:27.351842   53407 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0505 22:08:27.351878   53407 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0505 22:08:27.440129   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:27.447899   53407 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:08:27.447926   53407 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:08:27.940530   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:27.946328   53407 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:08:27.946366   53407 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:08:28.440249   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:28.448909   53407 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:08:28.448944   53407 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:08:28.939533   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:28.946030   53407 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0505 22:08:28.953109   53407 api_server.go:141] control plane version: v1.24.4
	I0505 22:08:28.953131   53407 api_server.go:131] duration metric: took 5.513700998s to wait for apiserver health ...
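
The /healthz progression above (connection refused, then 403 for the anonymous user until the RBAC bootstrap roles exist, then 500 while post-start hooks finish, then 200) is the apiserver's normal warm-up. A minimal Go sketch of the same poll loop, assuming the profile CA path and endpoint taken from the log; the 500 ms retry interval is arbitrary.

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	// Trust the cluster CA written by minikube for this profile (path from the log).
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}

    	// Poll until /healthz returns 200, tolerating the refused/403/500 phases seen above.
    	for {
    		resp, err := client.Get("https://192.168.39.118:8443/healthz")
    		if err != nil {
    			fmt.Println("not reachable yet:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
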
	I0505 22:08:28.953140   53407 cni.go:84] Creating CNI manager for ""
	I0505 22:08:28.953146   53407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:08:28.954916   53407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 22:08:28.956155   53407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 22:08:28.968336   53407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0505 22:08:28.988371   53407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 22:08:28.997953   53407 system_pods.go:59] 8 kube-system pods found
	I0505 22:08:28.997981   53407 system_pods.go:61] "coredns-6d4b75cb6d-g2sfj" [4d2d8224-a416-4a72-92e8-d3f7bb666d69] Running
	I0505 22:08:28.997987   53407 system_pods.go:61] "coredns-6d4b75cb6d-grf5s" [bbaf103d-8410-4e62-b7d3-b20e3acc5190] Running
	I0505 22:08:28.997992   53407 system_pods.go:61] "etcd-test-preload-006416" [7441f4d9-9c87-4c77-9394-18b170c6b89a] Running
	I0505 22:08:28.997997   53407 system_pods.go:61] "kube-apiserver-test-preload-006416" [16375186-95e2-40b6-ae77-c913c9168339] Running
	I0505 22:08:28.998005   53407 system_pods.go:61] "kube-controller-manager-test-preload-006416" [409db58b-c2a6-4005-93bf-8332a8974323] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0505 22:08:28.998015   53407 system_pods.go:61] "kube-proxy-x8vpw" [c645af09-cad1-412e-a26b-7f1d7afe8240] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0505 22:08:28.998025   53407 system_pods.go:61] "kube-scheduler-test-preload-006416" [ed1c4c02-83ca-408c-bc79-0dea1567a059] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0505 22:08:28.998033   53407 system_pods.go:61] "storage-provisioner" [6050fe65-0fb2-4d13-ac30-866b9a82057d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0505 22:08:28.998045   53407 system_pods.go:74] duration metric: took 9.652279ms to wait for pod list to return data ...
	I0505 22:08:28.998062   53407 node_conditions.go:102] verifying NodePressure condition ...
	I0505 22:08:29.000856   53407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 22:08:29.000878   53407 node_conditions.go:123] node cpu capacity is 2
	I0505 22:08:29.000889   53407 node_conditions.go:105] duration metric: took 2.821017ms to run NodePressure ...
	I0505 22:08:29.000906   53407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:08:29.219506   53407 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0505 22:08:29.226237   53407 kubeadm.go:733] kubelet initialised
	I0505 22:08:29.226267   53407 kubeadm.go:734] duration metric: took 6.732552ms waiting for restarted kubelet to initialise ...
	I0505 22:08:29.226277   53407 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 22:08:29.237556   53407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-g2sfj" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:29.246298   53407 pod_ready.go:97] node "test-preload-006416" hosting pod "coredns-6d4b75cb6d-g2sfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.246327   53407 pod_ready.go:81] duration metric: took 8.737702ms for pod "coredns-6d4b75cb6d-g2sfj" in "kube-system" namespace to be "Ready" ...
	E0505 22:08:29.246339   53407 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-006416" hosting pod "coredns-6d4b75cb6d-g2sfj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.246348   53407 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-grf5s" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:29.253009   53407 pod_ready.go:97] node "test-preload-006416" hosting pod "coredns-6d4b75cb6d-grf5s" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.253041   53407 pod_ready.go:81] duration metric: took 6.68212ms for pod "coredns-6d4b75cb6d-grf5s" in "kube-system" namespace to be "Ready" ...
	E0505 22:08:29.253053   53407 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-006416" hosting pod "coredns-6d4b75cb6d-grf5s" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.253062   53407 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:29.265885   53407 pod_ready.go:97] node "test-preload-006416" hosting pod "etcd-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.265923   53407 pod_ready.go:81] duration metric: took 12.849312ms for pod "etcd-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	E0505 22:08:29.265935   53407 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-006416" hosting pod "etcd-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.265945   53407 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:29.392705   53407 pod_ready.go:97] node "test-preload-006416" hosting pod "kube-apiserver-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.392738   53407 pod_ready.go:81] duration metric: took 126.781865ms for pod "kube-apiserver-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	E0505 22:08:29.392749   53407 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-006416" hosting pod "kube-apiserver-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.392761   53407 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:29.792427   53407 pod_ready.go:97] node "test-preload-006416" hosting pod "kube-controller-manager-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.792455   53407 pod_ready.go:81] duration metric: took 399.676685ms for pod "kube-controller-manager-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	E0505 22:08:29.792464   53407 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-006416" hosting pod "kube-controller-manager-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:29.792470   53407 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x8vpw" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:30.192671   53407 pod_ready.go:97] node "test-preload-006416" hosting pod "kube-proxy-x8vpw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:30.192711   53407 pod_ready.go:81] duration metric: took 400.231806ms for pod "kube-proxy-x8vpw" in "kube-system" namespace to be "Ready" ...
	E0505 22:08:30.192723   53407 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-006416" hosting pod "kube-proxy-x8vpw" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:30.192733   53407 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:30.592050   53407 pod_ready.go:97] node "test-preload-006416" hosting pod "kube-scheduler-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:30.592083   53407 pod_ready.go:81] duration metric: took 399.34098ms for pod "kube-scheduler-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	E0505 22:08:30.592092   53407 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-006416" hosting pod "kube-scheduler-test-preload-006416" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:30.592099   53407 pod_ready.go:38] duration metric: took 1.365800326s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
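The block above gives each system-critical pod its own 4m0s readiness budget and skips the check while the node itself still reports Ready=False. A roughly equivalent manual probe, assuming kubectl is pointed at the kubeconfig this run updates (/home/jenkins/minikube-integration/18602-11466/kubeconfig), would be:

    # hand-run sketch: wait for the CoreDNS pods to report the Ready condition
    kubectl --kubeconfig /home/jenkins/minikube-integration/18602-11466/kubeconfig \
      -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m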
	I0505 22:08:30.592115   53407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 22:08:30.604883   53407 ops.go:34] apiserver oom_adj: -16
	I0505 22:08:30.604908   53407 kubeadm.go:591] duration metric: took 9.959253564s to restartPrimaryControlPlane
	I0505 22:08:30.604917   53407 kubeadm.go:393] duration metric: took 10.011720026s to StartCluster
	I0505 22:08:30.604950   53407 settings.go:142] acquiring lock: {Name:mkbe19b7965e4b0b9928cd2b7b56f51dec95b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:08:30.605033   53407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:08:30.605756   53407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:08:30.605991   53407 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 22:08:30.607824   53407 out.go:177] * Verifying Kubernetes components...
	I0505 22:08:30.606035   53407 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 22:08:30.606167   53407 config.go:182] Loaded profile config "test-preload-006416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0505 22:08:30.609148   53407 addons.go:69] Setting storage-provisioner=true in profile "test-preload-006416"
	I0505 22:08:30.609201   53407 addons.go:234] Setting addon storage-provisioner=true in "test-preload-006416"
	I0505 22:08:30.609193   53407 addons.go:69] Setting default-storageclass=true in profile "test-preload-006416"
	I0505 22:08:30.609239   53407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-006416"
	W0505 22:08:30.609211   53407 addons.go:243] addon storage-provisioner should already be in state true
	I0505 22:08:30.609278   53407 host.go:66] Checking if "test-preload-006416" exists ...
	I0505 22:08:30.609156   53407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:08:30.609572   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:08:30.609607   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:08:30.609607   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:08:30.609650   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:08:30.624532   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38501
	I0505 22:08:30.624557   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0505 22:08:30.624971   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:08:30.624998   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:08:30.625446   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:08:30.625461   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:08:30.625576   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:08:30.625599   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:08:30.625797   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:08:30.625948   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:08:30.626028   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetState
	I0505 22:08:30.626489   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:08:30.626539   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:08:30.628365   53407 kapi.go:59] client config for test-preload-006416: &rest.Config{Host:"https://192.168.39.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/client.crt", KeyFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/client.key", CAFile:"/home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 22:08:30.628724   53407 addons.go:234] Setting addon default-storageclass=true in "test-preload-006416"
	W0505 22:08:30.628743   53407 addons.go:243] addon default-storageclass should already be in state true
	I0505 22:08:30.628770   53407 host.go:66] Checking if "test-preload-006416" exists ...
	I0505 22:08:30.629007   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:08:30.629043   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:08:30.641287   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0505 22:08:30.641837   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:08:30.642393   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:08:30.642426   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:08:30.642750   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:08:30.642799   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0505 22:08:30.642917   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetState
	I0505 22:08:30.643220   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:08:30.643706   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:08:30.643734   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:08:30.644076   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:08:30.644569   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:30.644660   53407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:08:30.644709   53407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:08:30.646807   53407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:08:30.648690   53407 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 22:08:30.648708   53407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 22:08:30.648720   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:30.652197   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:30.652669   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:30.652704   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:30.652903   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:30.653059   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:30.653227   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:30.653391   53407 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa Username:docker}
	I0505 22:08:30.660077   53407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I0505 22:08:30.660478   53407 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:08:30.660992   53407 main.go:141] libmachine: Using API Version  1
	I0505 22:08:30.661010   53407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:08:30.661385   53407 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:08:30.661598   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetState
	I0505 22:08:30.663391   53407 main.go:141] libmachine: (test-preload-006416) Calling .DriverName
	I0505 22:08:30.663660   53407 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 22:08:30.663678   53407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 22:08:30.663707   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHHostname
	I0505 22:08:30.666803   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:30.667250   53407 main.go:141] libmachine: (test-preload-006416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0a:4a", ip: ""} in network mk-test-preload-006416: {Iface:virbr1 ExpiryTime:2024-05-05 23:07:55 +0000 UTC Type:0 Mac:52:54:00:e2:0a:4a Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:test-preload-006416 Clientid:01:52:54:00:e2:0a:4a}
	I0505 22:08:30.667287   53407 main.go:141] libmachine: (test-preload-006416) DBG | domain test-preload-006416 has defined IP address 192.168.39.118 and MAC address 52:54:00:e2:0a:4a in network mk-test-preload-006416
	I0505 22:08:30.667453   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHPort
	I0505 22:08:30.667632   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHKeyPath
	I0505 22:08:30.667817   53407 main.go:141] libmachine: (test-preload-006416) Calling .GetSSHUsername
	I0505 22:08:30.667992   53407 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/test-preload-006416/id_rsa Username:docker}
	I0505 22:08:30.811681   53407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:08:30.829891   53407 node_ready.go:35] waiting up to 6m0s for node "test-preload-006416" to be "Ready" ...
	I0505 22:08:30.945742   53407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 22:08:30.947417   53407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 22:08:31.932308   53407 main.go:141] libmachine: Making call to close driver server
	I0505 22:08:31.932341   53407 main.go:141] libmachine: (test-preload-006416) Calling .Close
	I0505 22:08:31.932351   53407 main.go:141] libmachine: Making call to close driver server
	I0505 22:08:31.932362   53407 main.go:141] libmachine: (test-preload-006416) Calling .Close
	I0505 22:08:31.932644   53407 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:08:31.932696   53407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:08:31.932716   53407 main.go:141] libmachine: Making call to close driver server
	I0505 22:08:31.932729   53407 main.go:141] libmachine: (test-preload-006416) Calling .Close
	I0505 22:08:31.932837   53407 main.go:141] libmachine: (test-preload-006416) DBG | Closing plugin on server side
	I0505 22:08:31.932861   53407 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:08:31.932903   53407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:08:31.932920   53407 main.go:141] libmachine: Making call to close driver server
	I0505 22:08:31.932949   53407 main.go:141] libmachine: (test-preload-006416) Calling .Close
	I0505 22:08:31.933069   53407 main.go:141] libmachine: (test-preload-006416) DBG | Closing plugin on server side
	I0505 22:08:31.933128   53407 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:08:31.933138   53407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:08:31.934427   53407 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:08:31.934442   53407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:08:31.934453   53407 main.go:141] libmachine: (test-preload-006416) DBG | Closing plugin on server side
	I0505 22:08:31.942071   53407 main.go:141] libmachine: Making call to close driver server
	I0505 22:08:31.942093   53407 main.go:141] libmachine: (test-preload-006416) Calling .Close
	I0505 22:08:31.942365   53407 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:08:31.942385   53407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:08:31.942403   53407 main.go:141] libmachine: (test-preload-006416) DBG | Closing plugin on server side
	I0505 22:08:31.944248   53407 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0505 22:08:31.945700   53407 addons.go:510] duration metric: took 1.339677719s for enable addons: enabled=[storage-provisioner default-storageclass]
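Both addon manifests are copied over SSH and applied with the version-matched kubectl under /var/lib/minikube/binaries/v1.24.4 against /var/lib/minikube/kubeconfig; the "close driver server" chatter above is just the kvm2 plugin connections being torn down afterwards. From the host, the resulting addon state could be checked with something like:

    # hand-run sketch: list addon status for this profile
    minikube -p test-preload-006416 addons list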
	I0505 22:08:32.837534   53407 node_ready.go:53] node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:35.334148   53407 node_ready.go:53] node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:37.335230   53407 node_ready.go:53] node "test-preload-006416" has status "Ready":"False"
	I0505 22:08:37.833957   53407 node_ready.go:49] node "test-preload-006416" has status "Ready":"True"
	I0505 22:08:37.833980   53407 node_ready.go:38] duration metric: took 7.004059979s for node "test-preload-006416" to be "Ready" ...
	I0505 22:08:37.833990   53407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 22:08:37.838862   53407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-grf5s" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:37.847192   53407 pod_ready.go:92] pod "coredns-6d4b75cb6d-grf5s" in "kube-system" namespace has status "Ready":"True"
	I0505 22:08:37.847214   53407 pod_ready.go:81] duration metric: took 8.327568ms for pod "coredns-6d4b75cb6d-grf5s" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:37.847226   53407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:39.856026   53407 pod_ready.go:102] pod "etcd-test-preload-006416" in "kube-system" namespace has status "Ready":"False"
	I0505 22:08:42.353941   53407 pod_ready.go:102] pod "etcd-test-preload-006416" in "kube-system" namespace has status "Ready":"False"
	I0505 22:08:43.353732   53407 pod_ready.go:92] pod "etcd-test-preload-006416" in "kube-system" namespace has status "Ready":"True"
	I0505 22:08:43.353756   53407 pod_ready.go:81] duration metric: took 5.506523619s for pod "etcd-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.353765   53407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.358668   53407 pod_ready.go:92] pod "kube-apiserver-test-preload-006416" in "kube-system" namespace has status "Ready":"True"
	I0505 22:08:43.358686   53407 pod_ready.go:81] duration metric: took 4.914618ms for pod "kube-apiserver-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.358693   53407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.366751   53407 pod_ready.go:92] pod "kube-controller-manager-test-preload-006416" in "kube-system" namespace has status "Ready":"True"
	I0505 22:08:43.366773   53407 pod_ready.go:81] duration metric: took 8.073344ms for pod "kube-controller-manager-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.366786   53407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x8vpw" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.372489   53407 pod_ready.go:92] pod "kube-proxy-x8vpw" in "kube-system" namespace has status "Ready":"True"
	I0505 22:08:43.372507   53407 pod_ready.go:81] duration metric: took 5.714673ms for pod "kube-proxy-x8vpw" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.372518   53407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.378482   53407 pod_ready.go:92] pod "kube-scheduler-test-preload-006416" in "kube-system" namespace has status "Ready":"True"
	I0505 22:08:43.378504   53407 pod_ready.go:81] duration metric: took 5.978039ms for pod "kube-scheduler-test-preload-006416" in "kube-system" namespace to be "Ready" ...
	I0505 22:08:43.378517   53407 pod_ready.go:38] duration metric: took 5.544515558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 22:08:43.378532   53407 api_server.go:52] waiting for apiserver process to appear ...
	I0505 22:08:43.378684   53407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:08:43.395743   53407 api_server.go:72] duration metric: took 12.789725932s to wait for apiserver process to appear ...
	I0505 22:08:43.395764   53407 api_server.go:88] waiting for apiserver healthz status ...
	I0505 22:08:43.395778   53407 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0505 22:08:43.400844   53407 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0505 22:08:43.401850   53407 api_server.go:141] control plane version: v1.24.4
	I0505 22:08:43.401867   53407 api_server.go:131] duration metric: took 6.097601ms to wait for apiserver health ...
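The health probe above calls the apiserver directly on the VM address rather than going through kubectl. A hand-run equivalent, reusing the client certificate paths from the rest.Config logged earlier in this run, would be roughly:

    # hand-run sketch: query apiserver healthz with the profile's client certs
    curl --cacert /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt \
      --cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/client.crt \
      --key /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/test-preload-006416/client.key \
      https://192.168.39.118:8443/healthz

A healthy control plane answers 200 with the body "ok", which is what the lines above record.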
	I0505 22:08:43.401874   53407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 22:08:43.555232   53407 system_pods.go:59] 7 kube-system pods found
	I0505 22:08:43.555261   53407 system_pods.go:61] "coredns-6d4b75cb6d-grf5s" [bbaf103d-8410-4e62-b7d3-b20e3acc5190] Running
	I0505 22:08:43.555265   53407 system_pods.go:61] "etcd-test-preload-006416" [7441f4d9-9c87-4c77-9394-18b170c6b89a] Running
	I0505 22:08:43.555268   53407 system_pods.go:61] "kube-apiserver-test-preload-006416" [16375186-95e2-40b6-ae77-c913c9168339] Running
	I0505 22:08:43.555272   53407 system_pods.go:61] "kube-controller-manager-test-preload-006416" [409db58b-c2a6-4005-93bf-8332a8974323] Running
	I0505 22:08:43.555275   53407 system_pods.go:61] "kube-proxy-x8vpw" [c645af09-cad1-412e-a26b-7f1d7afe8240] Running
	I0505 22:08:43.555278   53407 system_pods.go:61] "kube-scheduler-test-preload-006416" [ed1c4c02-83ca-408c-bc79-0dea1567a059] Running
	I0505 22:08:43.555280   53407 system_pods.go:61] "storage-provisioner" [6050fe65-0fb2-4d13-ac30-866b9a82057d] Running
	I0505 22:08:43.555285   53407 system_pods.go:74] duration metric: took 153.406482ms to wait for pod list to return data ...
	I0505 22:08:43.555292   53407 default_sa.go:34] waiting for default service account to be created ...
	I0505 22:08:43.751083   53407 default_sa.go:45] found service account: "default"
	I0505 22:08:43.751116   53407 default_sa.go:55] duration metric: took 195.81413ms for default service account to be created ...
	I0505 22:08:43.751125   53407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 22:08:43.956048   53407 system_pods.go:86] 7 kube-system pods found
	I0505 22:08:43.956079   53407 system_pods.go:89] "coredns-6d4b75cb6d-grf5s" [bbaf103d-8410-4e62-b7d3-b20e3acc5190] Running
	I0505 22:08:43.956086   53407 system_pods.go:89] "etcd-test-preload-006416" [7441f4d9-9c87-4c77-9394-18b170c6b89a] Running
	I0505 22:08:43.956093   53407 system_pods.go:89] "kube-apiserver-test-preload-006416" [16375186-95e2-40b6-ae77-c913c9168339] Running
	I0505 22:08:43.956106   53407 system_pods.go:89] "kube-controller-manager-test-preload-006416" [409db58b-c2a6-4005-93bf-8332a8974323] Running
	I0505 22:08:43.956112   53407 system_pods.go:89] "kube-proxy-x8vpw" [c645af09-cad1-412e-a26b-7f1d7afe8240] Running
	I0505 22:08:43.956116   53407 system_pods.go:89] "kube-scheduler-test-preload-006416" [ed1c4c02-83ca-408c-bc79-0dea1567a059] Running
	I0505 22:08:43.956122   53407 system_pods.go:89] "storage-provisioner" [6050fe65-0fb2-4d13-ac30-866b9a82057d] Running
	I0505 22:08:43.956130   53407 system_pods.go:126] duration metric: took 204.999699ms to wait for k8s-apps to be running ...
	I0505 22:08:43.956140   53407 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 22:08:43.956189   53407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 22:08:43.976131   53407 system_svc.go:56] duration metric: took 19.979887ms WaitForService to wait for kubelet
	I0505 22:08:43.976165   53407 kubeadm.go:576] duration metric: took 13.370147819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 22:08:43.976187   53407 node_conditions.go:102] verifying NodePressure condition ...
	I0505 22:08:44.151730   53407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 22:08:44.151755   53407 node_conditions.go:123] node cpu capacity is 2
	I0505 22:08:44.151769   53407 node_conditions.go:105] duration metric: took 175.571576ms to run NodePressure ...
	I0505 22:08:44.151780   53407 start.go:240] waiting for startup goroutines ...
	I0505 22:08:44.151787   53407 start.go:245] waiting for cluster config update ...
	I0505 22:08:44.151796   53407 start.go:254] writing updated cluster config ...
	I0505 22:08:44.152071   53407 ssh_runner.go:195] Run: rm -f paused
	I0505 22:08:44.198657   53407 start.go:600] kubectl: 1.30.0, cluster: 1.24.4 (minor skew: 6)
	I0505 22:08:44.200811   53407 out.go:177] 
	W0505 22:08:44.202401   53407 out.go:239] ! /usr/local/bin/kubectl is version 1.30.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0505 22:08:44.203774   53407 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0505 22:08:44.205157   53407 out.go:177] * Done! kubectl is now configured to use "test-preload-006416" cluster and "default" namespace by default
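The warning above flags a six-minor-version gap between the host kubectl (1.30.0) and the 1.24.4 control plane, well outside the one-minor-version skew kubectl officially supports. As the hint suggests, one way to sidestep that is to route commands through minikube's bundled, version-matched kubectl, for example:

    # hand-run sketch: use minikube's bundled kubectl against this profile
    minikube -p test-preload-006416 kubectl -- get pods -A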
	
	
	==> CRI-O <==
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.146569554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946925146547242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75e066c3-65d5-4f85-a892-0e11b6e44454 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.147905800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78b91550-11a5-4546-9a44-e7fdf2cb46fd name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.147989505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78b91550-11a5-4546-9a44-e7fdf2cb46fd name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.148183636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8fbf303bb90cb68576a0dbe8726c2ff1339f368313bce5f48ec0b4640a285bf,PodSandboxId:3a1ff57ba8ace18ddc1ad608d0430375692ecfebdaf8d2dc5962d043f0ca8c6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714946916553732581,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-grf5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbaf103d-8410-4e62-b7d3-b20e3acc5190,},Annotations:map[string]string{io.kubernetes.container.hash: 659e9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d240d9db21cf26a68192a691ec7be7aea04ed95eec56583a1b6587af90318169,PodSandboxId:7c60e786b19644b5557182022ff78b36023e63c3b29d460355924ef24ed56c2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714946909381826492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c645af09-cad1-412e-a26b-7f1d7afe8240,},Annotations:map[string]string{io.kubernetes.container.hash: 86ead6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463efd331db1d714e8364c67daf084e41b9bd8748cce22c32ee2a0ee13bf9e0d,PodSandboxId:d524fce451c8a6c8b8a63baa73b7d8b6b9eadf53ecf98a6c8892dcf5cd75feb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946909347360175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60
50fe65-0fb2-4d13-ac30-866b9a82057d,},Annotations:map[string]string{io.kubernetes.container.hash: 34eb2dc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de2d2d59a48346e42432e4141d8de37c811e8cb345677cc8f573fb3cad0dd0e,PodSandboxId:e3cd8433a2991d05ad6e559c7c3a6d55657a4b7583eac6a3fcbad615e676b4e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714946903091392259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6a8d0c9a7f18c910ad86ad36819648b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9ad9e6e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d40e47129fb9e9bd9f7bb8b3ce1e57233201182b7658c267a1625729d0d305,PodSandboxId:6e6edc16ccf1a6cd5f44eda4c19e62bbfd03a8acd57051e1e85044d7279edd07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714946903119838037,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5550da831f42579c72f42d7
f9c5e76e5,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5679a8a663778f00487599704873f17527281810cd6d96df23f086bc15c0b9,PodSandboxId:42c40012bdf2b84b29a8d67df1101e5aa832d56bc6f920715a67f27439f804a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714946903037537924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77765ac987dc48dc31bb0e145034caa,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590d632ae443ae032b3c23a9c792dbfc86c31c94bf70265e00bf18cff3f32767,PodSandboxId:3e154fcb95d863de0f99cdbdf4cf4a626eb1454441c530d164d3c7729266285b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714946902965698958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727fff748fe3be2938dad9e77a6f0dbb,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4aece6ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78b91550-11a5-4546-9a44-e7fdf2cb46fd name=/runtime.v1.RuntimeService/ListContainers
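The CRI-O debug entries in this section are gRPC requests and responses on the runtime's CRI endpoint (Version, ImageFsInfo, ListContainers); the ListContainers reply above enumerates the kube-system containers after the restart, each at restartCount 1. Assuming crictl is available inside the guest (it ships in minikube's VM image), the same RPCs can be exercised by hand:

    # hand-run sketch: issue the Version and ListContainers CRI calls via crictl
    minikube -p test-preload-006416 ssh -- sudo crictl version
    minikube -p test-preload-006416 ssh -- sudo crictl ps -a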
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.188503651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9cdb506-885d-4eb1-b0c8-315eee47d62b name=/runtime.v1.RuntimeService/Version
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.188576027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9cdb506-885d-4eb1-b0c8-315eee47d62b name=/runtime.v1.RuntimeService/Version
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.189830243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0752d4ef-3e6d-4ace-81c1-94cb4a614782 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.190268787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946925190248846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0752d4ef-3e6d-4ace-81c1-94cb4a614782 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.191049045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb75b78e-7918-405c-966b-a54f6be7e6c8 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.191131012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb75b78e-7918-405c-966b-a54f6be7e6c8 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.191348320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8fbf303bb90cb68576a0dbe8726c2ff1339f368313bce5f48ec0b4640a285bf,PodSandboxId:3a1ff57ba8ace18ddc1ad608d0430375692ecfebdaf8d2dc5962d043f0ca8c6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714946916553732581,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-grf5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbaf103d-8410-4e62-b7d3-b20e3acc5190,},Annotations:map[string]string{io.kubernetes.container.hash: 659e9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d240d9db21cf26a68192a691ec7be7aea04ed95eec56583a1b6587af90318169,PodSandboxId:7c60e786b19644b5557182022ff78b36023e63c3b29d460355924ef24ed56c2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714946909381826492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c645af09-cad1-412e-a26b-7f1d7afe8240,},Annotations:map[string]string{io.kubernetes.container.hash: 86ead6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463efd331db1d714e8364c67daf084e41b9bd8748cce22c32ee2a0ee13bf9e0d,PodSandboxId:d524fce451c8a6c8b8a63baa73b7d8b6b9eadf53ecf98a6c8892dcf5cd75feb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946909347360175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60
50fe65-0fb2-4d13-ac30-866b9a82057d,},Annotations:map[string]string{io.kubernetes.container.hash: 34eb2dc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de2d2d59a48346e42432e4141d8de37c811e8cb345677cc8f573fb3cad0dd0e,PodSandboxId:e3cd8433a2991d05ad6e559c7c3a6d55657a4b7583eac6a3fcbad615e676b4e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714946903091392259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6a8d0c9a7f18c910ad86ad36819648b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9ad9e6e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d40e47129fb9e9bd9f7bb8b3ce1e57233201182b7658c267a1625729d0d305,PodSandboxId:6e6edc16ccf1a6cd5f44eda4c19e62bbfd03a8acd57051e1e85044d7279edd07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714946903119838037,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5550da831f42579c72f42d7
f9c5e76e5,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5679a8a663778f00487599704873f17527281810cd6d96df23f086bc15c0b9,PodSandboxId:42c40012bdf2b84b29a8d67df1101e5aa832d56bc6f920715a67f27439f804a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714946903037537924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77765ac987dc48dc31bb0e145034caa,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590d632ae443ae032b3c23a9c792dbfc86c31c94bf70265e00bf18cff3f32767,PodSandboxId:3e154fcb95d863de0f99cdbdf4cf4a626eb1454441c530d164d3c7729266285b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714946902965698958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727fff748fe3be2938dad9e77a6f0dbb,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4aece6ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb75b78e-7918-405c-966b-a54f6be7e6c8 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.232589712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f731d0d-b16d-494e-83fc-34f6bf78e9e6 name=/runtime.v1.RuntimeService/Version
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.232662425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f731d0d-b16d-494e-83fc-34f6bf78e9e6 name=/runtime.v1.RuntimeService/Version
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.234026157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e461965-a8b3-424c-a6c4-24d08c8b8e32 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.234643809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946925234619475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e461965-a8b3-424c-a6c4-24d08c8b8e32 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.235177284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77983181-849e-4424-be08-961fec29db7b name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.235233417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77983181-849e-4424-be08-961fec29db7b name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.235451088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8fbf303bb90cb68576a0dbe8726c2ff1339f368313bce5f48ec0b4640a285bf,PodSandboxId:3a1ff57ba8ace18ddc1ad608d0430375692ecfebdaf8d2dc5962d043f0ca8c6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714946916553732581,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-grf5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbaf103d-8410-4e62-b7d3-b20e3acc5190,},Annotations:map[string]string{io.kubernetes.container.hash: 659e9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d240d9db21cf26a68192a691ec7be7aea04ed95eec56583a1b6587af90318169,PodSandboxId:7c60e786b19644b5557182022ff78b36023e63c3b29d460355924ef24ed56c2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714946909381826492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c645af09-cad1-412e-a26b-7f1d7afe8240,},Annotations:map[string]string{io.kubernetes.container.hash: 86ead6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463efd331db1d714e8364c67daf084e41b9bd8748cce22c32ee2a0ee13bf9e0d,PodSandboxId:d524fce451c8a6c8b8a63baa73b7d8b6b9eadf53ecf98a6c8892dcf5cd75feb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946909347360175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60
50fe65-0fb2-4d13-ac30-866b9a82057d,},Annotations:map[string]string{io.kubernetes.container.hash: 34eb2dc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de2d2d59a48346e42432e4141d8de37c811e8cb345677cc8f573fb3cad0dd0e,PodSandboxId:e3cd8433a2991d05ad6e559c7c3a6d55657a4b7583eac6a3fcbad615e676b4e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714946903091392259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6a8d0c9a7f18c910ad86ad36819648b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9ad9e6e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d40e47129fb9e9bd9f7bb8b3ce1e57233201182b7658c267a1625729d0d305,PodSandboxId:6e6edc16ccf1a6cd5f44eda4c19e62bbfd03a8acd57051e1e85044d7279edd07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714946903119838037,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5550da831f42579c72f42d7
f9c5e76e5,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5679a8a663778f00487599704873f17527281810cd6d96df23f086bc15c0b9,PodSandboxId:42c40012bdf2b84b29a8d67df1101e5aa832d56bc6f920715a67f27439f804a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714946903037537924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77765ac987dc48dc31bb0e145034caa,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590d632ae443ae032b3c23a9c792dbfc86c31c94bf70265e00bf18cff3f32767,PodSandboxId:3e154fcb95d863de0f99cdbdf4cf4a626eb1454441c530d164d3c7729266285b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714946902965698958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727fff748fe3be2938dad9e77a6f0dbb,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4aece6ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77983181-849e-4424-be08-961fec29db7b name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.277251745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ae20704-e7a7-4c57-a8ab-0a976a5715ae name=/runtime.v1.RuntimeService/Version
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.277515417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ae20704-e7a7-4c57-a8ab-0a976a5715ae name=/runtime.v1.RuntimeService/Version
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.280611853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5d2a864-c61e-4015-8427-52948d4b4094 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.281073999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714946925281051652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5d2a864-c61e-4015-8427-52948d4b4094 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.281742521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=558579bf-9976-46db-82d1-bdab6f73cf25 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.281810811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=558579bf-9976-46db-82d1-bdab6f73cf25 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:08:45 test-preload-006416 crio[701]: time="2024-05-05 22:08:45.281969589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f8fbf303bb90cb68576a0dbe8726c2ff1339f368313bce5f48ec0b4640a285bf,PodSandboxId:3a1ff57ba8ace18ddc1ad608d0430375692ecfebdaf8d2dc5962d043f0ca8c6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714946916553732581,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-grf5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbaf103d-8410-4e62-b7d3-b20e3acc5190,},Annotations:map[string]string{io.kubernetes.container.hash: 659e9c27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d240d9db21cf26a68192a691ec7be7aea04ed95eec56583a1b6587af90318169,PodSandboxId:7c60e786b19644b5557182022ff78b36023e63c3b29d460355924ef24ed56c2e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714946909381826492,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c645af09-cad1-412e-a26b-7f1d7afe8240,},Annotations:map[string]string{io.kubernetes.container.hash: 86ead6ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:463efd331db1d714e8364c67daf084e41b9bd8748cce22c32ee2a0ee13bf9e0d,PodSandboxId:d524fce451c8a6c8b8a63baa73b7d8b6b9eadf53ecf98a6c8892dcf5cd75feb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714946909347360175,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60
50fe65-0fb2-4d13-ac30-866b9a82057d,},Annotations:map[string]string{io.kubernetes.container.hash: 34eb2dc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7de2d2d59a48346e42432e4141d8de37c811e8cb345677cc8f573fb3cad0dd0e,PodSandboxId:e3cd8433a2991d05ad6e559c7c3a6d55657a4b7583eac6a3fcbad615e676b4e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714946903091392259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6a8d0c9a7f18c910ad86ad36819648b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9ad9e6e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d40e47129fb9e9bd9f7bb8b3ce1e57233201182b7658c267a1625729d0d305,PodSandboxId:6e6edc16ccf1a6cd5f44eda4c19e62bbfd03a8acd57051e1e85044d7279edd07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714946903119838037,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5550da831f42579c72f42d7
f9c5e76e5,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5679a8a663778f00487599704873f17527281810cd6d96df23f086bc15c0b9,PodSandboxId:42c40012bdf2b84b29a8d67df1101e5aa832d56bc6f920715a67f27439f804a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714946903037537924,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b77765ac987dc48dc31bb0e145034caa,}
,Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:590d632ae443ae032b3c23a9c792dbfc86c31c94bf70265e00bf18cff3f32767,PodSandboxId:3e154fcb95d863de0f99cdbdf4cf4a626eb1454441c530d164d3c7729266285b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714946902965698958,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-006416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727fff748fe3be2938dad9e77a6f0dbb,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4aece6ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=558579bf-9976-46db-82d1-bdab6f73cf25 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f8fbf303bb90c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   3a1ff57ba8ace       coredns-6d4b75cb6d-grf5s
	d240d9db21cf2       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   7c60e786b1964       kube-proxy-x8vpw
	463efd331db1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   d524fce451c8a       storage-provisioner
	53d40e47129fb       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   6e6edc16ccf1a       kube-controller-manager-test-preload-006416
	7de2d2d59a483       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   e3cd8433a2991       etcd-test-preload-006416
	fb5679a8a6637       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   42c40012bdf2b       kube-scheduler-test-preload-006416
	590d632ae443a       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   3e154fcb95d86       kube-apiserver-test-preload-006416
	
	
	==> coredns [f8fbf303bb90cb68576a0dbe8726c2ff1339f368313bce5f48ec0b4640a285bf] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58003 - 40162 "HINFO IN 7181326343270282466.7248455674624680906. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014758499s
	
	
	==> describe nodes <==
	Name:               test-preload-006416
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-006416
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=test-preload-006416
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T22_07_01_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 22:06:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-006416
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 22:08:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 22:08:37 +0000   Sun, 05 May 2024 22:06:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 22:08:37 +0000   Sun, 05 May 2024 22:06:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 22:08:37 +0000   Sun, 05 May 2024 22:06:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 22:08:37 +0000   Sun, 05 May 2024 22:08:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    test-preload-006416
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e15945bd57cd476ca805c69deeec7bdd
	  System UUID:                e15945bd-57cd-476c-a805-c69deeec7bdd
	  Boot ID:                    9e75f835-b689-4323-84c8-3d5b748da30c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-grf5s                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 etcd-test-preload-006416                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         104s
	  kube-system                 kube-apiserver-test-preload-006416             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-test-preload-006416    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-x8vpw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-test-preload-006416             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node test-preload-006416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node test-preload-006416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s               kubelet          Node test-preload-006416 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                94s                kubelet          Node test-preload-006416 status is now: NodeReady
	  Normal  RegisteredNode           93s                node-controller  Node test-preload-006416 event: Registered Node test-preload-006416 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-006416 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-006416 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-006416 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node test-preload-006416 event: Registered Node test-preload-006416 in Controller
	
	
	==> dmesg <==
	[May 5 22:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051931] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043098] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.642716] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472368] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.693579] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 5 22:08] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.061418] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067063] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.208666] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.162532] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.286904] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[ +12.895497] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.065845] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.013615] systemd-fstab-generator[1090]: Ignoring "noauto" option for root device
	[  +4.025318] kauditd_printk_skb: 105 callbacks suppressed
	[  +4.654121] systemd-fstab-generator[1726]: Ignoring "noauto" option for root device
	[  +5.610143] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [7de2d2d59a48346e42432e4141d8de37c811e8cb345677cc8f573fb3cad0dd0e] <==
	{"level":"info","ts":"2024-05-05T22:08:23.749Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"86c29206b457f123","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-05T22:08:23.753Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-05T22:08:23.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 switched to configuration voters=(9710484304057332003)"}
	{"level":"info","ts":"2024-05-05T22:08:23.758Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","added-peer-id":"86c29206b457f123","added-peer-peer-urls":["https://192.168.39.118:2380"]}
	{"level":"info","ts":"2024-05-05T22:08:23.760Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T22:08:23.761Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-05T22:08:23.772Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-05-05T22:08:23.772Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-05-05T22:08:23.772Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-05T22:08:23.772Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"86c29206b457f123","initial-advertise-peer-urls":["https://192.168.39.118:2380"],"listen-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-05T22:08:23.775Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgPreVoteResp from 86c29206b457f123 at term 2"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became candidate at term 3"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became leader at term 3"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-05-05T22:08:24.687Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"86c29206b457f123","local-member-attributes":"{Name:test-preload-006416 ClientURLs:[https://192.168.39.118:2379]}","request-path":"/0/members/86c29206b457f123/attributes","cluster-id":"56e4fbef5627b38f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-05T22:08:24.688Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T22:08:24.691Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-05T22:08:24.698Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-05T22:08:24.698Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-05T22:08:24.693Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-05T22:08:24.709Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.118:2379"}
	
	
	==> kernel <==
	 22:08:45 up 0 min,  0 users,  load average: 0.67, 0.18, 0.06
	Linux test-preload-006416 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [590d632ae443ae032b3c23a9c792dbfc86c31c94bf70265e00bf18cff3f32767] <==
	I0505 22:08:27.241780       1 controller.go:85] Starting OpenAPI V3 controller
	I0505 22:08:27.241832       1 naming_controller.go:291] Starting NamingConditionController
	I0505 22:08:27.242223       1 establishing_controller.go:76] Starting EstablishingController
	I0505 22:08:27.242365       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0505 22:08:27.242414       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0505 22:08:27.243806       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0505 22:08:27.339870       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0505 22:08:27.342170       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0505 22:08:27.384102       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 22:08:27.395806       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0505 22:08:27.396113       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0505 22:08:27.398053       1 cache.go:39] Caches are synced for autoregister controller
	I0505 22:08:27.398330       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 22:08:27.428898       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 22:08:27.436987       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0505 22:08:27.848816       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0505 22:08:28.203909       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0505 22:08:29.094705       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0505 22:08:29.106685       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0505 22:08:29.140806       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0505 22:08:29.171892       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0505 22:08:29.191227       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0505 22:08:29.803147       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0505 22:08:39.899944       1 controller.go:611] quota admission added evaluator for: endpoints
	I0505 22:08:39.938550       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [53d40e47129fb9e9bd9f7bb8b3ce1e57233201182b7658c267a1625729d0d305] <==
	W0505 22:08:39.802622       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-006416. Assuming now as a timestamp.
	I0505 22:08:39.802691       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0505 22:08:39.803641       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0505 22:08:39.803699       1 shared_informer.go:262] Caches are synced for expand
	I0505 22:08:39.802508       1 shared_informer.go:262] Caches are synced for deployment
	I0505 22:08:39.802521       1 shared_informer.go:262] Caches are synced for job
	I0505 22:08:39.807551       1 shared_informer.go:262] Caches are synced for namespace
	I0505 22:08:39.810588       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0505 22:08:39.812824       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0505 22:08:39.812968       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0505 22:08:39.813029       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0505 22:08:39.813128       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0505 22:08:39.818857       1 shared_informer.go:262] Caches are synced for crt configmap
	I0505 22:08:39.870060       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0505 22:08:39.887086       1 shared_informer.go:262] Caches are synced for endpoint
	I0505 22:08:39.913223       1 shared_informer.go:262] Caches are synced for resource quota
	I0505 22:08:39.925927       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0505 22:08:39.936315       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0505 22:08:39.964527       1 shared_informer.go:262] Caches are synced for resource quota
	I0505 22:08:39.977022       1 shared_informer.go:262] Caches are synced for disruption
	I0505 22:08:39.977072       1 disruption.go:371] Sending events to api server.
	I0505 22:08:40.053253       1 shared_informer.go:262] Caches are synced for attach detach
	I0505 22:08:40.450581       1 shared_informer.go:262] Caches are synced for garbage collector
	I0505 22:08:40.501638       1 shared_informer.go:262] Caches are synced for garbage collector
	I0505 22:08:40.501683       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [d240d9db21cf26a68192a691ec7be7aea04ed95eec56583a1b6587af90318169] <==
	I0505 22:08:29.744385       1 node.go:163] Successfully retrieved node IP: 192.168.39.118
	I0505 22:08:29.744467       1 server_others.go:138] "Detected node IP" address="192.168.39.118"
	I0505 22:08:29.744563       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0505 22:08:29.788694       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0505 22:08:29.788735       1 server_others.go:206] "Using iptables Proxier"
	I0505 22:08:29.789366       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0505 22:08:29.790120       1 server.go:661] "Version info" version="v1.24.4"
	I0505 22:08:29.790158       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 22:08:29.791811       1 config.go:317] "Starting service config controller"
	I0505 22:08:29.791858       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0505 22:08:29.791876       1 config.go:226] "Starting endpoint slice config controller"
	I0505 22:08:29.791880       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0505 22:08:29.792731       1 config.go:444] "Starting node config controller"
	I0505 22:08:29.792785       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0505 22:08:29.892397       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0505 22:08:29.892539       1 shared_informer.go:262] Caches are synced for service config
	I0505 22:08:29.892827       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [fb5679a8a663778f00487599704873f17527281810cd6d96df23f086bc15c0b9] <==
	I0505 22:08:24.576649       1 serving.go:348] Generated self-signed cert in-memory
	W0505 22:08:27.302018       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0505 22:08:27.302156       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0505 22:08:27.302257       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 22:08:27.302478       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 22:08:27.353091       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0505 22:08:27.353162       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 22:08:27.363771       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 22:08:27.366132       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 22:08:27.366071       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0505 22:08:27.366086       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 22:08:27.467474       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.295215    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75x89\" (UniqueName: \"kubernetes.io/projected/6050fe65-0fb2-4d13-ac30-866b9a82057d-kube-api-access-75x89\") pod \"storage-provisioner\" (UID: \"6050fe65-0fb2-4d13-ac30-866b9a82057d\") " pod="kube-system/storage-provisioner"
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.295251    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c645af09-cad1-412e-a26b-7f1d7afe8240-kube-proxy\") pod \"kube-proxy-x8vpw\" (UID: \"c645af09-cad1-412e-a26b-7f1d7afe8240\") " pod="kube-system/kube-proxy-x8vpw"
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.295270    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6050fe65-0fb2-4d13-ac30-866b9a82057d-tmp\") pod \"storage-provisioner\" (UID: \"6050fe65-0fb2-4d13-ac30-866b9a82057d\") " pod="kube-system/storage-provisioner"
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.295367    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c645af09-cad1-412e-a26b-7f1d7afe8240-xtables-lock\") pod \"kube-proxy-x8vpw\" (UID: \"c645af09-cad1-412e-a26b-7f1d7afe8240\") " pod="kube-system/kube-proxy-x8vpw"
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.295387    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c645af09-cad1-412e-a26b-7f1d7afe8240-lib-modules\") pod \"kube-proxy-x8vpw\" (UID: \"c645af09-cad1-412e-a26b-7f1d7afe8240\") " pod="kube-system/kube-proxy-x8vpw"
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.295406    1097 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume\") pod \"coredns-6d4b75cb6d-grf5s\" (UID: \"bbaf103d-8410-4e62-b7d3-b20e3acc5190\") " pod="kube-system/coredns-6d4b75cb6d-grf5s"
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.295418    1097 reconciler.go:159] "Reconciler: start to sync state"
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.537765    1097 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2d8224-a416-4a72-92e8-d3f7bb666d69-config-volume\") pod \"4d2d8224-a416-4a72-92e8-d3f7bb666d69\" (UID: \"4d2d8224-a416-4a72-92e8-d3f7bb666d69\") "
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.537948    1097 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p95ff\" (UniqueName: \"kubernetes.io/projected/4d2d8224-a416-4a72-92e8-d3f7bb666d69-kube-api-access-p95ff\") pod \"4d2d8224-a416-4a72-92e8-d3f7bb666d69\" (UID: \"4d2d8224-a416-4a72-92e8-d3f7bb666d69\") "
	May 05 22:08:28 test-preload-006416 kubelet[1097]: E0505 22:08:28.538777    1097 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 05 22:08:28 test-preload-006416 kubelet[1097]: E0505 22:08:28.538891    1097 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume podName:bbaf103d-8410-4e62-b7d3-b20e3acc5190 nodeName:}" failed. No retries permitted until 2024-05-05 22:08:29.03886853 +0000 UTC m=+6.939301836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume") pod "coredns-6d4b75cb6d-grf5s" (UID: "bbaf103d-8410-4e62-b7d3-b20e3acc5190") : object "kube-system"/"coredns" not registered
	May 05 22:08:28 test-preload-006416 kubelet[1097]: W0505 22:08:28.539540    1097 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/4d2d8224-a416-4a72-92e8-d3f7bb666d69/volumes/kubernetes.io~projected/kube-api-access-p95ff: clearQuota called, but quotas disabled
	May 05 22:08:28 test-preload-006416 kubelet[1097]: W0505 22:08:28.539636    1097 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/4d2d8224-a416-4a72-92e8-d3f7bb666d69/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.539855    1097 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2d8224-a416-4a72-92e8-d3f7bb666d69-kube-api-access-p95ff" (OuterVolumeSpecName: "kube-api-access-p95ff") pod "4d2d8224-a416-4a72-92e8-d3f7bb666d69" (UID: "4d2d8224-a416-4a72-92e8-d3f7bb666d69"). InnerVolumeSpecName "kube-api-access-p95ff". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.540254    1097 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2d8224-a416-4a72-92e8-d3f7bb666d69-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d2d8224-a416-4a72-92e8-d3f7bb666d69" (UID: "4d2d8224-a416-4a72-92e8-d3f7bb666d69"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.639245    1097 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2d8224-a416-4a72-92e8-d3f7bb666d69-config-volume\") on node \"test-preload-006416\" DevicePath \"\""
	May 05 22:08:28 test-preload-006416 kubelet[1097]: I0505 22:08:28.639333    1097 reconciler.go:384] "Volume detached for volume \"kube-api-access-p95ff\" (UniqueName: \"kubernetes.io/projected/4d2d8224-a416-4a72-92e8-d3f7bb666d69-kube-api-access-p95ff\") on node \"test-preload-006416\" DevicePath \"\""
	May 05 22:08:29 test-preload-006416 kubelet[1097]: E0505 22:08:29.043096    1097 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 05 22:08:29 test-preload-006416 kubelet[1097]: E0505 22:08:29.043188    1097 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume podName:bbaf103d-8410-4e62-b7d3-b20e3acc5190 nodeName:}" failed. No retries permitted until 2024-05-05 22:08:30.043173562 +0000 UTC m=+7.943606853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume") pod "coredns-6d4b75cb6d-grf5s" (UID: "bbaf103d-8410-4e62-b7d3-b20e3acc5190") : object "kube-system"/"coredns" not registered
	May 05 22:08:30 test-preload-006416 kubelet[1097]: E0505 22:08:30.050814    1097 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 05 22:08:30 test-preload-006416 kubelet[1097]: E0505 22:08:30.050914    1097 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume podName:bbaf103d-8410-4e62-b7d3-b20e3acc5190 nodeName:}" failed. No retries permitted until 2024-05-05 22:08:32.050896388 +0000 UTC m=+9.951329691 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume") pod "coredns-6d4b75cb6d-grf5s" (UID: "bbaf103d-8410-4e62-b7d3-b20e3acc5190") : object "kube-system"/"coredns" not registered
	May 05 22:08:30 test-preload-006416 kubelet[1097]: E0505 22:08:30.376950    1097 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-grf5s" podUID=bbaf103d-8410-4e62-b7d3-b20e3acc5190
	May 05 22:08:30 test-preload-006416 kubelet[1097]: I0505 22:08:30.386857    1097 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4d2d8224-a416-4a72-92e8-d3f7bb666d69 path="/var/lib/kubelet/pods/4d2d8224-a416-4a72-92e8-d3f7bb666d69/volumes"
	May 05 22:08:32 test-preload-006416 kubelet[1097]: E0505 22:08:32.069406    1097 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 05 22:08:32 test-preload-006416 kubelet[1097]: E0505 22:08:32.069515    1097 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume podName:bbaf103d-8410-4e62-b7d3-b20e3acc5190 nodeName:}" failed. No retries permitted until 2024-05-05 22:08:36.069493522 +0000 UTC m=+13.969926814 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bbaf103d-8410-4e62-b7d3-b20e3acc5190-config-volume") pod "coredns-6d4b75cb6d-grf5s" (UID: "bbaf103d-8410-4e62-b7d3-b20e3acc5190") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [463efd331db1d714e8364c67daf084e41b9bd8748cce22c32ee2a0ee13bf9e0d] <==
	I0505 22:08:29.603869       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-006416 -n test-preload-006416
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-006416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-006416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-006416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-006416: (1.119431802s)
--- FAIL: TestPreload (265.81s)

                                                
                                    
x
+
TestKubernetesUpgrade (1237.98s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m32.900286055s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-131082] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-131082" primary control-plane node in "kubernetes-upgrade-131082" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 22:10:46.978927   54943 out.go:291] Setting OutFile to fd 1 ...
	I0505 22:10:46.979036   54943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:10:46.979045   54943 out.go:304] Setting ErrFile to fd 2...
	I0505 22:10:46.979049   54943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:10:46.979264   54943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 22:10:46.980091   54943 out.go:298] Setting JSON to false
	I0505 22:10:46.981025   54943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6794,"bootTime":1714940253,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 22:10:46.981082   54943 start.go:139] virtualization: kvm guest
	I0505 22:10:46.983521   54943 out.go:177] * [kubernetes-upgrade-131082] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 22:10:46.985172   54943 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 22:10:46.985130   54943 notify.go:220] Checking for updates...
	I0505 22:10:46.990027   54943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 22:10:46.992791   54943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:10:46.994360   54943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 22:10:46.996123   54943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 22:10:46.998546   54943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 22:10:47.000238   54943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 22:10:47.037697   54943 out.go:177] * Using the kvm2 driver based on user configuration
	I0505 22:10:47.039081   54943 start.go:297] selected driver: kvm2
	I0505 22:10:47.039104   54943 start.go:901] validating driver "kvm2" against <nil>
	I0505 22:10:47.039119   54943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 22:10:47.040115   54943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:10:47.040252   54943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 22:10:47.056573   54943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 22:10:47.056636   54943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 22:10:47.056890   54943 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 22:10:47.056953   54943 cni.go:84] Creating CNI manager for ""
	I0505 22:10:47.056971   54943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:10:47.056982   54943 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 22:10:47.057052   54943 start.go:340] cluster config:
	{Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:10:47.057175   54943 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:10:47.059069   54943 out.go:177] * Starting "kubernetes-upgrade-131082" primary control-plane node in "kubernetes-upgrade-131082" cluster
	I0505 22:10:47.060444   54943 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0505 22:10:47.060491   54943 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0505 22:10:47.060504   54943 cache.go:56] Caching tarball of preloaded images
	I0505 22:10:47.060582   54943 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 22:10:47.060596   54943 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0505 22:10:47.061012   54943 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/config.json ...
	I0505 22:10:47.061042   54943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/config.json: {Name:mk3dfde32492834f40e4714306d8feaf2cd3873e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:10:47.061192   54943 start.go:360] acquireMachinesLock for kubernetes-upgrade-131082: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 22:10:47.061226   54943 start.go:364] duration metric: took 15.429µs to acquireMachinesLock for "kubernetes-upgrade-131082"
	I0505 22:10:47.061247   54943 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 22:10:47.061300   54943 start.go:125] createHost starting for "" (driver="kvm2")
	I0505 22:10:47.063103   54943 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 22:10:47.063237   54943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:10:47.063299   54943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:10:47.078242   54943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0505 22:10:47.078636   54943 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:10:47.079238   54943 main.go:141] libmachine: Using API Version  1
	I0505 22:10:47.079259   54943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:10:47.079699   54943 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:10:47.079917   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetMachineName
	I0505 22:10:47.080079   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:10:47.080226   54943 start.go:159] libmachine.API.Create for "kubernetes-upgrade-131082" (driver="kvm2")
	I0505 22:10:47.080248   54943 client.go:168] LocalClient.Create starting
	I0505 22:10:47.080274   54943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem
	I0505 22:10:47.080305   54943 main.go:141] libmachine: Decoding PEM data...
	I0505 22:10:47.080321   54943 main.go:141] libmachine: Parsing certificate...
	I0505 22:10:47.080372   54943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem
	I0505 22:10:47.080389   54943 main.go:141] libmachine: Decoding PEM data...
	I0505 22:10:47.080402   54943 main.go:141] libmachine: Parsing certificate...
	I0505 22:10:47.080417   54943 main.go:141] libmachine: Running pre-create checks...
	I0505 22:10:47.080431   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .PreCreateCheck
	I0505 22:10:47.080938   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetConfigRaw
	I0505 22:10:47.081384   54943 main.go:141] libmachine: Creating machine...
	I0505 22:10:47.081402   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .Create
	I0505 22:10:47.081538   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Creating KVM machine...
	I0505 22:10:47.083055   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found existing default KVM network
	I0505 22:10:47.084117   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:47.083844   55004 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d960}
	I0505 22:10:47.084149   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | created network xml: 
	I0505 22:10:47.084165   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | <network>
	I0505 22:10:47.084177   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |   <name>mk-kubernetes-upgrade-131082</name>
	I0505 22:10:47.084193   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |   <dns enable='no'/>
	I0505 22:10:47.084207   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |   
	I0505 22:10:47.084223   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0505 22:10:47.084242   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |     <dhcp>
	I0505 22:10:47.084258   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0505 22:10:47.084271   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |     </dhcp>
	I0505 22:10:47.084282   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |   </ip>
	I0505 22:10:47.084306   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG |   
	I0505 22:10:47.084331   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | </network>
	I0505 22:10:47.084346   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | 
	I0505 22:10:47.089893   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | trying to create private KVM network mk-kubernetes-upgrade-131082 192.168.39.0/24...
	I0505 22:10:47.166587   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | private KVM network mk-kubernetes-upgrade-131082 192.168.39.0/24 created
	I0505 22:10:47.166639   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Setting up store path in /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082 ...
	I0505 22:10:47.166717   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:47.166589   55004 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 22:10:47.166744   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Building disk image from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 22:10:47.166770   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Downloading /home/jenkins/minikube-integration/18602-11466/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 22:10:47.389079   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:47.388963   55004 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa...
	I0505 22:10:47.679365   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:47.679243   55004 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/kubernetes-upgrade-131082.rawdisk...
	I0505 22:10:47.679400   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Writing magic tar header
	I0505 22:10:47.679436   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Writing SSH key tar header
	I0505 22:10:47.679458   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:47.679379   55004 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082 ...
	I0505 22:10:47.679654   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082 (perms=drwx------)
	I0505 22:10:47.679697   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube/machines (perms=drwxr-xr-x)
	I0505 22:10:47.679716   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082
	I0505 22:10:47.679729   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube/machines
	I0505 22:10:47.679740   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 22:10:47.679763   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18602-11466
	I0505 22:10:47.679779   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466/.minikube (perms=drwxr-xr-x)
	I0505 22:10:47.679789   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0505 22:10:47.679809   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Setting executable bit set on /home/jenkins/minikube-integration/18602-11466 (perms=drwxrwxr-x)
	I0505 22:10:47.679835   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0505 22:10:47.679845   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Checking permissions on dir: /home/jenkins
	I0505 22:10:47.679860   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Checking permissions on dir: /home
	I0505 22:10:47.679872   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Skipping /home - not owner
	I0505 22:10:47.679886   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0505 22:10:47.679893   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Creating domain...
	I0505 22:10:47.681045   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) define libvirt domain using xml: 
	I0505 22:10:47.681059   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) <domain type='kvm'>
	I0505 22:10:47.681089   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   <name>kubernetes-upgrade-131082</name>
	I0505 22:10:47.681104   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   <memory unit='MiB'>2200</memory>
	I0505 22:10:47.681113   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   <vcpu>2</vcpu>
	I0505 22:10:47.681131   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   <features>
	I0505 22:10:47.681144   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <acpi/>
	I0505 22:10:47.681155   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <apic/>
	I0505 22:10:47.681164   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <pae/>
	I0505 22:10:47.681177   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     
	I0505 22:10:47.681190   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   </features>
	I0505 22:10:47.681202   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   <cpu mode='host-passthrough'>
	I0505 22:10:47.681211   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   
	I0505 22:10:47.681218   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   </cpu>
	I0505 22:10:47.681240   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   <os>
	I0505 22:10:47.681260   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <type>hvm</type>
	I0505 22:10:47.681270   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <boot dev='cdrom'/>
	I0505 22:10:47.681278   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <boot dev='hd'/>
	I0505 22:10:47.681310   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <bootmenu enable='no'/>
	I0505 22:10:47.681338   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   </os>
	I0505 22:10:47.681350   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   <devices>
	I0505 22:10:47.681363   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <disk type='file' device='cdrom'>
	I0505 22:10:47.681381   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/boot2docker.iso'/>
	I0505 22:10:47.681393   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <target dev='hdc' bus='scsi'/>
	I0505 22:10:47.681427   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <readonly/>
	I0505 22:10:47.681447   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     </disk>
	I0505 22:10:47.681458   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <disk type='file' device='disk'>
	I0505 22:10:47.681469   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0505 22:10:47.681489   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <source file='/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/kubernetes-upgrade-131082.rawdisk'/>
	I0505 22:10:47.681510   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <target dev='hda' bus='virtio'/>
	I0505 22:10:47.681522   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     </disk>
	I0505 22:10:47.681539   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <interface type='network'>
	I0505 22:10:47.681552   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <source network='mk-kubernetes-upgrade-131082'/>
	I0505 22:10:47.681562   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <model type='virtio'/>
	I0505 22:10:47.681575   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     </interface>
	I0505 22:10:47.681587   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <interface type='network'>
	I0505 22:10:47.681600   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <source network='default'/>
	I0505 22:10:47.681612   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <model type='virtio'/>
	I0505 22:10:47.681633   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     </interface>
	I0505 22:10:47.681655   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <serial type='pty'>
	I0505 22:10:47.681669   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <target port='0'/>
	I0505 22:10:47.681678   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     </serial>
	I0505 22:10:47.681684   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <console type='pty'>
	I0505 22:10:47.681692   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <target type='serial' port='0'/>
	I0505 22:10:47.681698   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     </console>
	I0505 22:10:47.681706   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     <rng model='virtio'>
	I0505 22:10:47.681712   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)       <backend model='random'>/dev/random</backend>
	I0505 22:10:47.681719   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     </rng>
	I0505 22:10:47.681725   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     
	I0505 22:10:47.681732   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)     
	I0505 22:10:47.681742   54943 main.go:141] libmachine: (kubernetes-upgrade-131082)   </devices>
	I0505 22:10:47.681749   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) </domain>
	I0505 22:10:47.681757   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) 
	I0505 22:10:47.686431   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:0c:11:46 in network default
	I0505 22:10:47.686994   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Ensuring networks are active...
	I0505 22:10:47.687027   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:47.687777   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Ensuring network default is active
	I0505 22:10:47.688055   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Ensuring network mk-kubernetes-upgrade-131082 is active
	I0505 22:10:47.688515   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Getting domain xml...
	I0505 22:10:47.689258   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Creating domain...
	I0505 22:10:48.973554   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Waiting to get IP...
	I0505 22:10:48.974373   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:48.974762   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:48.974794   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:48.974724   55004 retry.go:31] will retry after 248.147002ms: waiting for machine to come up
	I0505 22:10:49.224229   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:49.224650   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:49.224688   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:49.224632   55004 retry.go:31] will retry after 239.496227ms: waiting for machine to come up
	I0505 22:10:49.466008   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:49.466447   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:49.466475   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:49.466391   55004 retry.go:31] will retry after 344.003425ms: waiting for machine to come up
	I0505 22:10:49.812186   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:49.812700   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:49.812732   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:49.812656   55004 retry.go:31] will retry after 565.258041ms: waiting for machine to come up
	I0505 22:10:50.379430   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:50.379873   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:50.379898   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:50.379833   55004 retry.go:31] will retry after 751.659576ms: waiting for machine to come up
	I0505 22:10:51.132811   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:51.133292   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:51.133319   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:51.133214   55004 retry.go:31] will retry after 662.736343ms: waiting for machine to come up
	I0505 22:10:51.797827   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:51.798234   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:51.798283   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:51.798195   55004 retry.go:31] will retry after 1.103143952s: waiting for machine to come up
	I0505 22:10:52.903066   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:52.903595   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:52.903626   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:52.903541   55004 retry.go:31] will retry after 1.219594534s: waiting for machine to come up
	I0505 22:10:54.124616   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:54.125005   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:54.125037   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:54.124952   55004 retry.go:31] will retry after 1.347151087s: waiting for machine to come up
	I0505 22:10:55.475355   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:55.475854   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:55.475967   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:55.475790   55004 retry.go:31] will retry after 1.558157168s: waiting for machine to come up
	I0505 22:10:57.035381   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:57.041293   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:57.041332   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:57.035833   55004 retry.go:31] will retry after 2.05478155s: waiting for machine to come up
	I0505 22:10:59.093679   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:10:59.094340   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:10:59.094368   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:10:59.094284   55004 retry.go:31] will retry after 3.128877427s: waiting for machine to come up
	I0505 22:11:02.225028   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:02.225475   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:11:02.225502   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:11:02.225425   55004 retry.go:31] will retry after 3.93233443s: waiting for machine to come up
	I0505 22:11:06.160758   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:06.161222   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find current IP address of domain kubernetes-upgrade-131082 in network mk-kubernetes-upgrade-131082
	I0505 22:11:06.161260   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | I0505 22:11:06.161182   55004 retry.go:31] will retry after 5.262609889s: waiting for machine to come up
	I0505 22:11:11.425598   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.425947   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Found IP for machine: 192.168.39.41
	I0505 22:11:11.425968   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has current primary IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.425976   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Reserving static IP address...
	I0505 22:11:11.426344   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-131082", mac: "52:54:00:19:c6:ca", ip: "192.168.39.41"} in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.501763   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Getting to WaitForSSH function...
	I0505 22:11:11.501804   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Reserved static IP address: 192.168.39.41
	I0505 22:11:11.501820   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Waiting for SSH to be available...
	I0505 22:11:11.504609   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.505093   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:11.505128   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.505295   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Using SSH client type: external
	I0505 22:11:11.505333   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa (-rw-------)
	I0505 22:11:11.505365   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 22:11:11.505383   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | About to run SSH command:
	I0505 22:11:11.505404   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | exit 0
	I0505 22:11:11.635804   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | SSH cmd err, output: <nil>: 
	I0505 22:11:11.636080   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) KVM machine creation complete!
	I0505 22:11:11.636412   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetConfigRaw
	I0505 22:11:11.637019   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:11:11.637293   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:11:11.637456   54943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 22:11:11.637473   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetState
	I0505 22:11:11.638902   54943 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 22:11:11.638919   54943 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 22:11:11.638926   54943 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 22:11:11.638935   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:11.641617   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.642069   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:11.642111   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.642180   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:11.642385   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:11.642570   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:11.642737   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:11.642898   54943 main.go:141] libmachine: Using SSH client type: native
	I0505 22:11:11.643084   54943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:11:11.643099   54943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 22:11:11.761140   54943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:11:11.761161   54943 main.go:141] libmachine: Detecting the provisioner...
	I0505 22:11:11.761169   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:11.763956   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.764274   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:11.764319   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.764505   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:11.764720   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:11.764909   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:11.765085   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:11.765266   54943 main.go:141] libmachine: Using SSH client type: native
	I0505 22:11:11.765471   54943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:11:11.765485   54943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 22:11:11.885372   54943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 22:11:11.885441   54943 main.go:141] libmachine: found compatible host: buildroot
	I0505 22:11:11.885452   54943 main.go:141] libmachine: Provisioning with buildroot...
	I0505 22:11:11.885460   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetMachineName
	I0505 22:11:11.885714   54943 buildroot.go:166] provisioning hostname "kubernetes-upgrade-131082"
	I0505 22:11:11.885743   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetMachineName
	I0505 22:11:11.885916   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:11.888648   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.889039   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:11.889069   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:11.889241   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:11.889441   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:11.889631   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:11.889769   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:11.889908   54943 main.go:141] libmachine: Using SSH client type: native
	I0505 22:11:11.890104   54943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:11:11.890119   54943 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-131082 && echo "kubernetes-upgrade-131082" | sudo tee /etc/hostname
	I0505 22:11:12.029825   54943 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-131082
	
	I0505 22:11:12.029853   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:12.032645   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.032995   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.033021   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.033185   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:12.033378   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.033544   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.033762   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:12.033953   54943 main.go:141] libmachine: Using SSH client type: native
	I0505 22:11:12.034121   54943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:11:12.034138   54943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-131082' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-131082/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-131082' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 22:11:12.155923   54943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:11:12.155978   54943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 22:11:12.155997   54943 buildroot.go:174] setting up certificates
	I0505 22:11:12.156005   54943 provision.go:84] configureAuth start
	I0505 22:11:12.156014   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetMachineName
	I0505 22:11:12.156363   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetIP
	I0505 22:11:12.159422   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.159856   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.159926   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.160021   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:12.162561   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.162854   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.162893   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.163078   54943 provision.go:143] copyHostCerts
	I0505 22:11:12.163120   54943 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 22:11:12.163135   54943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 22:11:12.163198   54943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 22:11:12.163289   54943 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 22:11:12.163299   54943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 22:11:12.163324   54943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 22:11:12.163385   54943 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 22:11:12.163396   54943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 22:11:12.163418   54943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 22:11:12.163506   54943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-131082 san=[127.0.0.1 192.168.39.41 kubernetes-upgrade-131082 localhost minikube]
	I0505 22:11:12.379815   54943 provision.go:177] copyRemoteCerts
	I0505 22:11:12.379870   54943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 22:11:12.379891   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:12.382760   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.383105   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.383140   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.383299   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:12.383514   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.383711   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:12.383857   54943 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:11:12.470476   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 22:11:12.497640   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0505 22:11:12.524325   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 22:11:12.550000   54943 provision.go:87] duration metric: took 393.98326ms to configureAuth
	I0505 22:11:12.550033   54943 buildroot.go:189] setting minikube options for container-runtime
	I0505 22:11:12.550275   54943 config.go:182] Loaded profile config "kubernetes-upgrade-131082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0505 22:11:12.550438   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:12.553093   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.553462   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.553510   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.553604   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:12.553816   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.553973   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.554108   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:12.554233   54943 main.go:141] libmachine: Using SSH client type: native
	I0505 22:11:12.556142   54943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:11:12.556175   54943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 22:11:12.836245   54943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 22:11:12.836285   54943 main.go:141] libmachine: Checking connection to Docker...
	I0505 22:11:12.836298   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetURL
	I0505 22:11:12.837752   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | Using libvirt version 6000000
	I0505 22:11:12.839983   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.840428   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.840463   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.840601   54943 main.go:141] libmachine: Docker is up and running!
	I0505 22:11:12.840615   54943 main.go:141] libmachine: Reticulating splines...
	I0505 22:11:12.840621   54943 client.go:171] duration metric: took 25.760364193s to LocalClient.Create
	I0505 22:11:12.840640   54943 start.go:167] duration metric: took 25.760415483s to libmachine.API.Create "kubernetes-upgrade-131082"
	I0505 22:11:12.840651   54943 start.go:293] postStartSetup for "kubernetes-upgrade-131082" (driver="kvm2")
	I0505 22:11:12.840660   54943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 22:11:12.840680   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:11:12.840888   54943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 22:11:12.840909   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:12.843098   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.843457   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.843502   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.843616   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:12.843804   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.843959   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:12.844101   54943 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:11:12.931089   54943 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 22:11:12.936294   54943 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 22:11:12.936324   54943 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 22:11:12.936395   54943 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 22:11:12.936497   54943 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 22:11:12.936609   54943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 22:11:12.947117   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:11:12.974718   54943 start.go:296] duration metric: took 134.05538ms for postStartSetup
	I0505 22:11:12.974786   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetConfigRaw
	I0505 22:11:12.975400   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetIP
	I0505 22:11:12.977792   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.978129   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.978160   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.978345   54943 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/config.json ...
	I0505 22:11:12.978516   54943 start.go:128] duration metric: took 25.917208586s to createHost
	I0505 22:11:12.978539   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:12.980768   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.981103   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:12.981133   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:12.981252   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:12.981415   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.981575   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:12.981706   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:12.981859   54943 main.go:141] libmachine: Using SSH client type: native
	I0505 22:11:12.982076   54943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:11:12.982094   54943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 22:11:13.092696   54943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714947073.063458451
	
	I0505 22:11:13.092722   54943 fix.go:216] guest clock: 1714947073.063458451
	I0505 22:11:13.092732   54943 fix.go:229] Guest: 2024-05-05 22:11:13.063458451 +0000 UTC Remote: 2024-05-05 22:11:12.978527921 +0000 UTC m=+26.068850079 (delta=84.93053ms)
	I0505 22:11:13.092778   54943 fix.go:200] guest clock delta is within tolerance: 84.93053ms
	I0505 22:11:13.092783   54943 start.go:83] releasing machines lock for "kubernetes-upgrade-131082", held for 26.031551275s
	I0505 22:11:13.092808   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:11:13.093084   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetIP
	I0505 22:11:13.095953   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:13.096357   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:13.096395   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:13.096597   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:11:13.097048   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:11:13.097249   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:11:13.097359   54943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 22:11:13.097400   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:13.097518   54943 ssh_runner.go:195] Run: cat /version.json
	I0505 22:11:13.097540   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:11:13.100236   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:13.100270   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:13.100589   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:13.100617   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:13.100644   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:13.100666   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:13.100778   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:13.100935   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:11:13.100947   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:13.101100   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:11:13.101124   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:13.101216   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:11:13.101289   54943 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:11:13.101342   54943 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:11:13.185111   54943 ssh_runner.go:195] Run: systemctl --version
	I0505 22:11:13.212928   54943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 22:11:13.383412   54943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 22:11:13.390405   54943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 22:11:13.390488   54943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 22:11:13.408828   54943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 22:11:13.408855   54943 start.go:494] detecting cgroup driver to use...
	I0505 22:11:13.408929   54943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 22:11:13.428218   54943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 22:11:13.443893   54943 docker.go:217] disabling cri-docker service (if available) ...
	I0505 22:11:13.443954   54943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 22:11:13.460217   54943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 22:11:13.475851   54943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 22:11:13.593658   54943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 22:11:13.741948   54943 docker.go:233] disabling docker service ...
	I0505 22:11:13.742019   54943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 22:11:13.758652   54943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 22:11:13.772490   54943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 22:11:13.908367   54943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 22:11:14.049101   54943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 22:11:14.064769   54943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 22:11:14.085952   54943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0505 22:11:14.086031   54943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:11:14.103477   54943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 22:11:14.103551   54943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:11:14.121415   54943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:11:14.136112   54943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:11:14.148900   54943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 22:11:14.161509   54943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 22:11:14.172855   54943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 22:11:14.172923   54943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 22:11:14.189093   54943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
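Context for the two steps above: minikube probes the bridge-netfilter sysctl, and when /proc/sys/net/bridge/bridge-nf-call-iptables is missing (as in this run) it loads br_netfilter and then enables IPv4 forwarding. A minimal Go sketch of that recovery path, assuming root and running locally rather than through the test's SSH runner (ensureBridgeNetfilter is an invented name, not minikube's API):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the recovery path in the log: if the
// bridge-nf-call-iptables sysctl file is missing, load br_netfilter,
// then make sure IPv4 forwarding is enabled. Illustrative only; needs root.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}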
	I0505 22:11:14.200602   54943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:11:14.345182   54943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 22:11:14.512258   54943 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 22:11:14.512348   54943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 22:11:14.517996   54943 start.go:562] Will wait 60s for crictl version
	I0505 22:11:14.518051   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:14.522148   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 22:11:14.568829   54943 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 22:11:14.568925   54943 ssh_runner.go:195] Run: crio --version
	I0505 22:11:14.605719   54943 ssh_runner.go:195] Run: crio --version
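The "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" steps above are poll-with-deadline loops. A minimal Go sketch of that pattern, not minikube's actual helper (waitForPath is an invented name):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}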
	I0505 22:11:14.644234   54943 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0505 22:11:14.645834   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetIP
	I0505 22:11:14.648667   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:14.649066   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:11:03 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:11:14.649094   54943 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:11:14.649312   54943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 22:11:14.654289   54943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:11:14.668826   54943 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 22:11:14.668921   54943 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0505 22:11:14.668978   54943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:11:14.705475   54943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0505 22:11:14.705559   54943 ssh_runner.go:195] Run: which lz4
	I0505 22:11:14.710055   54943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 22:11:14.714596   54943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 22:11:14.714630   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0505 22:11:16.801088   54943 crio.go:462] duration metric: took 2.091089594s to copy over tarball
	I0505 22:11:16.801177   54943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 22:11:19.729394   54943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.928188846s)
	I0505 22:11:19.729418   54943 crio.go:469] duration metric: took 2.928296164s to extract the tarball
	I0505 22:11:19.729425   54943 ssh_runner.go:146] rm: /preloaded.tar.lz4
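The preload sequence above is: stat the tarball on the node, copy it over only if the stat fails, extract it into /var with lz4, then delete it. A hedged Go sketch of that flow, run locally for simplicity (minikube performs the copy over SSH; preloadTarball and the local paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// preloadTarball sketches the flow in the log above: if the tarball is not
// already on the node, copy it there, extract it into /var with lz4, then
// remove it. The copy is a plain `cp` here; minikube streams it over SSH.
func preloadTarball(src, dst string) error {
	if _, err := os.Stat(dst); err != nil {
		if out, err := exec.Command("cp", src, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("copy: %v: %s", err, out)
		}
	}
	if out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", dst).CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	if out, err := exec.Command("sudo", "rm", "-f", dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cleanup: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := preloadTarball("preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}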
	I0505 22:11:19.773130   54943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:11:19.836575   54943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0505 22:11:19.836601   54943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0505 22:11:19.836651   54943 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:11:19.836711   54943 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:11:19.836747   54943 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:11:19.836764   54943 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0505 22:11:19.836781   54943 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0505 22:11:19.836798   54943 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0505 22:11:19.836722   54943 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:11:19.836730   54943 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:11:19.838352   54943 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:11:19.838388   54943 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:11:19.838396   54943 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:11:19.838414   54943 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0505 22:11:19.838477   54943 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0505 22:11:19.838490   54943 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:11:19.838492   54943 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:11:19.838591   54943 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0505 22:11:20.014811   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:11:20.034574   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0505 22:11:20.060362   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:11:20.083752   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0505 22:11:20.090881   54943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0505 22:11:20.090925   54943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:11:20.090967   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:20.090882   54943 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0505 22:11:20.091037   54943 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0505 22:11:20.091095   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:20.150862   54943 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0505 22:11:20.150910   54943 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0505 22:11:20.150942   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:20.150943   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:11:20.150945   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0505 22:11:20.150878   54943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0505 22:11:20.150997   54943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:11:20.151022   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:20.160278   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:11:20.208719   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:11:20.212002   54943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0505 22:11:20.222286   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0505 22:11:20.222294   54943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0505 22:11:20.245544   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0505 22:11:20.255381   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:11:20.256656   54943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0505 22:11:20.311166   54943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0505 22:11:20.311210   54943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:11:20.311264   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:20.318326   54943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0505 22:11:20.355136   54943 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0505 22:11:20.355178   54943 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0505 22:11:20.355192   54943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0505 22:11:20.355225   54943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:11:20.355234   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:20.355245   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:11:20.355262   54943 ssh_runner.go:195] Run: which crictl
	I0505 22:11:20.361793   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:11:20.362986   54943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0505 22:11:20.437490   54943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0505 22:11:20.451743   54943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0505 22:11:20.451753   54943 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0505 22:11:20.754112   54943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:11:20.910851   54943 cache_images.go:92] duration metric: took 1.07423622s to LoadCachedImages
	W0505 22:11:20.910932   54943 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
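The cache_images flow above checks each required image in the container runtime, compares its ID against the expected digest, removes mismatches with crictl, and then tries to reload from the local cache (which fails here because the cached pause_3.2 file is absent). A small Go sketch of just the presence/ID check, with the expected ID taken from the pause:3.2 line above (needsTransfer is an invented name, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image is absent from the node's container
// runtime or present under a different ID than expected - the same check
// the "needs transfer" lines above describe. Illustrative only.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/pause:3.2"
	if needsTransfer(img, "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c") {
		fmt.Println(img, "needs transfer")
	}
}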
	I0505 22:11:20.910949   54943 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.20.0 crio true true} ...
	I0505 22:11:20.911093   54943 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-131082 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 22:11:20.911180   54943 ssh_runner.go:195] Run: crio config
	I0505 22:11:20.971531   54943 cni.go:84] Creating CNI manager for ""
	I0505 22:11:20.971557   54943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:11:20.971569   54943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 22:11:20.971588   54943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-131082 NodeName:kubernetes-upgrade-131082 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0505 22:11:20.971746   54943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-131082"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 22:11:20.971814   54943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0505 22:11:20.983449   54943 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 22:11:20.983531   54943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 22:11:20.994323   54943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0505 22:11:21.013316   54943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 22:11:21.035468   54943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0505 22:11:21.057370   54943 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I0505 22:11:21.062118   54943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
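The two /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same grep-out-then-append pattern. A minimal Go sketch of that idea, written against a scratch file rather than the real /etc/hosts (ensureHostsEntry and hosts.test are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the grep-then-rewrite pattern from the log:
// drop any existing line for the given hostname, then append a fresh
// "ip<TAB>host" entry.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.39.41", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}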
	I0505 22:11:21.076247   54943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:11:21.217538   54943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:11:21.237319   54943 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082 for IP: 192.168.39.41
	I0505 22:11:21.237346   54943 certs.go:194] generating shared ca certs ...
	I0505 22:11:21.237368   54943 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:11:21.237538   54943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 22:11:21.237589   54943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 22:11:21.237602   54943 certs.go:256] generating profile certs ...
	I0505 22:11:21.237664   54943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/client.key
	I0505 22:11:21.237684   54943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/client.crt with IP's: []
	I0505 22:11:21.337387   54943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/client.crt ...
	I0505 22:11:21.337417   54943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/client.crt: {Name:mk96c09e688eec2f4807b2c76e3dfcef9947cd23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:11:21.337599   54943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/client.key ...
	I0505 22:11:21.337620   54943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/client.key: {Name:mked97ae9c79aaea7aeda2e0e0f7959148dcc279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:11:21.337727   54943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key.175bb149
	I0505 22:11:21.337749   54943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.crt.175bb149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41]
	I0505 22:11:21.398267   54943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.crt.175bb149 ...
	I0505 22:11:21.398294   54943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.crt.175bb149: {Name:mkb1911f513dd55edc205d2965e4cdac6372ff61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:11:21.398489   54943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key.175bb149 ...
	I0505 22:11:21.398507   54943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key.175bb149: {Name:mk89a52a1c099936cd662a29989d993cc49dcbda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:11:21.398615   54943 certs.go:381] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.crt.175bb149 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.crt
	I0505 22:11:21.398693   54943 certs.go:385] copying /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key.175bb149 -> /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key
	I0505 22:11:21.398743   54943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.key
	I0505 22:11:21.398758   54943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.crt with IP's: []
	I0505 22:11:21.453786   54943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.crt ...
	I0505 22:11:21.453818   54943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.crt: {Name:mkc44671047ac80e1a87fa2de64ae28a2128d255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:11:21.453997   54943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.key ...
	I0505 22:11:21.454014   54943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.key: {Name:mk6842471c4442e2d65ef1e2b871702b817e3ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
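For reference on the cert steps above: the profile certificates are X.509 certs whose SANs include the service, loopback, and node IPs listed in the log. The sketch below generates a certificate with those same IP SANs using Go's crypto/x509; it is self-signed for brevity, whereas minikube signs profile certs with the minikubeCA key, so treat it as illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a self-signed certificate carrying the same kinds
	// of IP SANs the log lists for the apiserver profile cert.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.41"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "wrote a self-signed demo certificate")
}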
	I0505 22:11:21.454206   54943 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 22:11:21.454247   54943 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 22:11:21.454257   54943 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 22:11:21.454281   54943 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 22:11:21.454316   54943 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 22:11:21.454348   54943 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 22:11:21.454398   54943 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:11:21.455260   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 22:11:21.486324   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 22:11:21.515190   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 22:11:21.543449   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 22:11:21.572158   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0505 22:11:21.599315   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 22:11:21.628280   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 22:11:21.656480   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 22:11:21.691139   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 22:11:21.726652   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 22:11:21.764006   54943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 22:11:21.794748   54943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 22:11:21.817012   54943 ssh_runner.go:195] Run: openssl version
	I0505 22:11:21.823244   54943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 22:11:21.835228   54943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:11:21.840543   54943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:11:21.840593   54943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:11:21.847225   54943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 22:11:21.860113   54943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 22:11:21.872685   54943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 22:11:21.877851   54943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 22:11:21.877913   54943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 22:11:21.884309   54943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 22:11:21.896523   54943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 22:11:21.908626   54943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 22:11:21.913653   54943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 22:11:21.913702   54943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 22:11:21.919867   54943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
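The openssl/ln steps above install each CA under /etc/ssl/certs twice: once by name and once as a <subject-hash>.0 symlink so TLS libraries can find it by hash (OpenSSL's c_rehash convention). A hedged Go sketch of that step (linkBySubjectHash is an invented helper; it shells out to the same openssl command the log shows):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a CA certificate
// and symlinks it as <hash>.0 in certsDir, like the `ln -fs` in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any existing link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}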
	I0505 22:11:21.931451   54943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 22:11:21.936135   54943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 22:11:21.936193   54943 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:11:21.936271   54943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 22:11:21.936321   54943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:11:21.978915   54943 cri.go:89] found id: ""
	I0505 22:11:21.978993   54943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0505 22:11:21.990911   54943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 22:11:22.006339   54943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:11:22.018811   54943 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:11:22.018842   54943 kubeadm.go:156] found existing configuration files:
	
	I0505 22:11:22.018906   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:11:22.030709   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:11:22.030767   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:11:22.045267   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:11:22.061011   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:11:22.061093   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:11:22.074835   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:11:22.089181   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:11:22.089262   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:11:22.106327   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:11:22.122418   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:11:22.122496   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:11:22.139461   54943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 22:11:22.449327   54943 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 22:13:20.349438   54943 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0505 22:13:20.349554   54943 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0505 22:13:20.352684   54943 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0505 22:13:20.352757   54943 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 22:13:20.352851   54943 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 22:13:20.352980   54943 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 22:13:20.353112   54943 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 22:13:20.353204   54943 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 22:13:20.495855   54943 out.go:204]   - Generating certificates and keys ...
	I0505 22:13:20.495985   54943 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 22:13:20.496077   54943 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 22:13:20.496172   54943 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0505 22:13:20.496257   54943 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0505 22:13:20.496335   54943 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0505 22:13:20.496409   54943 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0505 22:13:20.496479   54943 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0505 22:13:20.496663   54943 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-131082 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	I0505 22:13:20.496731   54943 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0505 22:13:20.496917   54943 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-131082 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	I0505 22:13:20.497009   54943 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0505 22:13:20.497096   54943 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0505 22:13:20.497161   54943 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0505 22:13:20.497232   54943 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 22:13:20.497307   54943 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 22:13:20.497374   54943 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 22:13:20.497458   54943 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 22:13:20.497534   54943 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 22:13:20.497622   54943 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 22:13:20.497773   54943 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 22:13:20.497827   54943 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 22:13:20.497914   54943 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 22:13:20.750536   54943 out.go:204]   - Booting up control plane ...
	I0505 22:13:20.750679   54943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 22:13:20.750792   54943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 22:13:20.750904   54943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 22:13:20.751030   54943 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 22:13:20.751291   54943 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0505 22:13:20.751379   54943 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0505 22:13:20.751476   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:13:20.751727   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:13:20.751818   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:13:20.752044   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:13:20.752136   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:13:20.752360   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:13:20.752445   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:13:20.752667   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:13:20.752750   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:13:20.752974   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:13:20.752980   54943 kubeadm.go:309] 
	I0505 22:13:20.753028   54943 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0505 22:13:20.753082   54943 kubeadm.go:309] 		timed out waiting for the condition
	I0505 22:13:20.753089   54943 kubeadm.go:309] 
	I0505 22:13:20.753132   54943 kubeadm.go:309] 	This error is likely caused by:
	I0505 22:13:20.753170   54943 kubeadm.go:309] 		- The kubelet is not running
	I0505 22:13:20.753287   54943 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0505 22:13:20.753294   54943 kubeadm.go:309] 
	I0505 22:13:20.753415   54943 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0505 22:13:20.753451   54943 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0505 22:13:20.753488   54943 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0505 22:13:20.753495   54943 kubeadm.go:309] 
	I0505 22:13:20.753636   54943 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0505 22:13:20.753737   54943 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0505 22:13:20.753745   54943 kubeadm.go:309] 
	I0505 22:13:20.753880   54943 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0505 22:13:20.753988   54943 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0505 22:13:20.754088   54943 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0505 22:13:20.754177   54943 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0505 22:13:20.754329   54943 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-131082 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-131082 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-131082 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-131082 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0505 22:13:20.754385   54943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0505 22:13:20.754647   54943 kubeadm.go:309] 
	I0505 22:13:22.013397   54943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.258983026s)
	I0505 22:13:22.013478   54943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 22:13:22.034773   54943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:13:22.049717   54943 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:13:22.049743   54943 kubeadm.go:156] found existing configuration files:
	
	I0505 22:13:22.049800   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:13:22.065005   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:13:22.065068   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:13:22.079928   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:13:22.093943   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:13:22.094002   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:13:22.108609   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:13:22.122424   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:13:22.122479   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:13:22.137555   54943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:13:22.151762   54943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:13:22.151820   54943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:13:22.165988   54943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 22:13:22.512031   54943 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 22:15:19.153082   54943 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0505 22:15:19.153202   54943 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0505 22:15:19.155074   54943 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0505 22:15:19.155141   54943 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 22:15:19.155252   54943 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 22:15:19.155363   54943 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 22:15:19.155450   54943 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 22:15:19.155530   54943 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 22:15:19.157089   54943 out.go:204]   - Generating certificates and keys ...
	I0505 22:15:19.157153   54943 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 22:15:19.157223   54943 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 22:15:19.157304   54943 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0505 22:15:19.157356   54943 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0505 22:15:19.157414   54943 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0505 22:15:19.157458   54943 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0505 22:15:19.157525   54943 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0505 22:15:19.157579   54943 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0505 22:15:19.157662   54943 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0505 22:15:19.157748   54943 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0505 22:15:19.157812   54943 kubeadm.go:309] [certs] Using the existing "sa" key
	I0505 22:15:19.157889   54943 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 22:15:19.157950   54943 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 22:15:19.158024   54943 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 22:15:19.158111   54943 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 22:15:19.158190   54943 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 22:15:19.158332   54943 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 22:15:19.158444   54943 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 22:15:19.158517   54943 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 22:15:19.158618   54943 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 22:15:19.160195   54943 out.go:204]   - Booting up control plane ...
	I0505 22:15:19.160307   54943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 22:15:19.160417   54943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 22:15:19.160504   54943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 22:15:19.160611   54943 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 22:15:19.160775   54943 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0505 22:15:19.160823   54943 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0505 22:15:19.160879   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:15:19.161077   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:15:19.161195   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:15:19.161375   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:15:19.161437   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:15:19.161614   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:15:19.161678   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:15:19.161946   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:15:19.162039   54943 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0505 22:15:19.162272   54943 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0505 22:15:19.162293   54943 kubeadm.go:309] 
	I0505 22:15:19.162345   54943 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0505 22:15:19.162396   54943 kubeadm.go:309] 		timed out waiting for the condition
	I0505 22:15:19.162406   54943 kubeadm.go:309] 
	I0505 22:15:19.162455   54943 kubeadm.go:309] 	This error is likely caused by:
	I0505 22:15:19.162506   54943 kubeadm.go:309] 		- The kubelet is not running
	I0505 22:15:19.162620   54943 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0505 22:15:19.162633   54943 kubeadm.go:309] 
	I0505 22:15:19.162763   54943 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0505 22:15:19.162810   54943 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0505 22:15:19.162868   54943 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0505 22:15:19.162878   54943 kubeadm.go:309] 
	I0505 22:15:19.163038   54943 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0505 22:15:19.163153   54943 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0505 22:15:19.163165   54943 kubeadm.go:309] 
	I0505 22:15:19.163329   54943 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0505 22:15:19.163447   54943 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0505 22:15:19.163553   54943 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0505 22:15:19.163627   54943 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0505 22:15:19.163656   54943 kubeadm.go:309] 
	I0505 22:15:19.163710   54943 kubeadm.go:393] duration metric: took 3m57.227521956s to StartCluster
	I0505 22:15:19.163749   54943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:15:19.163799   54943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:15:19.215982   54943 cri.go:89] found id: ""
	I0505 22:15:19.216017   54943 logs.go:276] 0 containers: []
	W0505 22:15:19.216028   54943 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:15:19.216035   54943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:15:19.216104   54943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:15:19.253325   54943 cri.go:89] found id: ""
	I0505 22:15:19.253355   54943 logs.go:276] 0 containers: []
	W0505 22:15:19.253363   54943 logs.go:278] No container was found matching "etcd"
	I0505 22:15:19.253371   54943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:15:19.253429   54943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:15:19.297091   54943 cri.go:89] found id: ""
	I0505 22:15:19.297120   54943 logs.go:276] 0 containers: []
	W0505 22:15:19.297129   54943 logs.go:278] No container was found matching "coredns"
	I0505 22:15:19.297135   54943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:15:19.297183   54943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:15:19.337802   54943 cri.go:89] found id: ""
	I0505 22:15:19.337836   54943 logs.go:276] 0 containers: []
	W0505 22:15:19.337847   54943 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:15:19.337855   54943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:15:19.337922   54943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:15:19.376698   54943 cri.go:89] found id: ""
	I0505 22:15:19.376726   54943 logs.go:276] 0 containers: []
	W0505 22:15:19.376736   54943 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:15:19.376744   54943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:15:19.376818   54943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:15:19.414431   54943 cri.go:89] found id: ""
	I0505 22:15:19.414465   54943 logs.go:276] 0 containers: []
	W0505 22:15:19.414475   54943 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:15:19.414484   54943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:15:19.414544   54943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:15:19.452003   54943 cri.go:89] found id: ""
	I0505 22:15:19.452028   54943 logs.go:276] 0 containers: []
	W0505 22:15:19.452037   54943 logs.go:278] No container was found matching "kindnet"
	I0505 22:15:19.452054   54943 logs.go:123] Gathering logs for kubelet ...
	I0505 22:15:19.452066   54943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:15:19.505919   54943 logs.go:123] Gathering logs for dmesg ...
	I0505 22:15:19.505948   54943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:15:19.521135   54943 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:15:19.521159   54943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:15:19.655967   54943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:15:19.655992   54943 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:15:19.656003   54943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:15:19.749097   54943 logs.go:123] Gathering logs for container status ...
	I0505 22:15:19.749137   54943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0505 22:15:19.793102   54943 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0505 22:15:19.793150   54943 out.go:239] * 
	W0505 22:15:19.793220   54943 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0505 22:15:19.793253   54943 out.go:239] * 
	W0505 22:15:19.794145   54943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 22:15:19.797186   54943 out.go:177] 
	W0505 22:15:19.798478   54943 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0505 22:15:19.798522   54943 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0505 22:15:19.798545   54943 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0505 22:15:19.800016   54943 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
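The kubeadm output above shows the kubelet never answering on localhost:10248, so the control plane times out before any static pod starts. A minimal manual troubleshooting sketch, assuming the kubernetes-upgrade-131082 profile from this run still exists; the commands simply mirror the suggestions printed in the log and are not part of the test itself:

    out/minikube-linux-amd64 -p kubernetes-upgrade-131082 ssh "sudo systemctl status kubelet"
    out/minikube-linux-amd64 -p kubernetes-upgrade-131082 ssh "sudo journalctl -xeu kubelet | tail -n 100"
    out/minikube-linux-amd64 -p kubernetes-upgrade-131082 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    # retry with the cgroup driver the log suggests; flags copied from the failing invocation
    out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd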
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-131082
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-131082: (2.36970313s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-131082 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-131082 status --format={{.Host}}: exit status 7 (82.567337ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
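Exit status 7 with the host reported as Stopped is tolerated here, since the next step starts the cluster again. When reproducing this by hand, structured status output can be easier to inspect than the exit code (a sketch; assumes the --output flag is available in this minikube build):

    out/minikube-linux-amd64 -p kubernetes-upgrade-131082 status --output=json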
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.653949076s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-131082 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.258103ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-131082] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-131082
	    minikube start -p kubernetes-upgrade-131082 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1310822 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-131082 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
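The downgrade refusal is the expected result; recovering manually would follow option 1 from the suggestion above. A sketch of that path (the --driver and --container-runtime flags are carried over from the test invocation and are an assumption, since the printed suggestion omits them):

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-131082
    out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio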
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0505 22:16:51.947263   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (14m43.949065159s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-131082] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-131082" primary control-plane node in "kubernetes-upgrade-131082" cluster
	* Updating the running kvm2 "kubernetes-upgrade-131082" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 22:16:37.174137   61991 out.go:291] Setting OutFile to fd 1 ...
	I0505 22:16:37.174251   61991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:16:37.174259   61991 out.go:304] Setting ErrFile to fd 2...
	I0505 22:16:37.174264   61991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:16:37.174453   61991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 22:16:37.174966   61991 out.go:298] Setting JSON to false
	I0505 22:16:37.175875   61991 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7144,"bootTime":1714940253,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 22:16:37.175933   61991 start.go:139] virtualization: kvm guest
	I0505 22:16:37.177948   61991 out.go:177] * [kubernetes-upgrade-131082] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 22:16:37.179165   61991 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 22:16:37.179167   61991 notify.go:220] Checking for updates...
	I0505 22:16:37.180451   61991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 22:16:37.181795   61991 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:16:37.182998   61991 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 22:16:37.184221   61991 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 22:16:37.185486   61991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 22:16:37.188391   61991 config.go:182] Loaded profile config "kubernetes-upgrade-131082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:16:37.188945   61991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:16:37.188986   61991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:16:37.203708   61991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0505 22:16:37.204109   61991 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:16:37.204651   61991 main.go:141] libmachine: Using API Version  1
	I0505 22:16:37.204673   61991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:16:37.204980   61991 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:16:37.205163   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:16:37.205378   61991 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 22:16:37.205649   61991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:16:37.205684   61991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:16:37.219600   61991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0505 22:16:37.219961   61991 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:16:37.220353   61991 main.go:141] libmachine: Using API Version  1
	I0505 22:16:37.220375   61991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:16:37.220657   61991 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:16:37.220831   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:16:37.255618   61991 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 22:16:37.257034   61991 start.go:297] selected driver: kvm2
	I0505 22:16:37.257057   61991 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:16:37.257180   61991 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 22:16:37.257789   61991 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:16:37.257858   61991 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 22:16:37.272788   61991 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 22:16:37.273181   61991 cni.go:84] Creating CNI manager for ""
	I0505 22:16:37.273197   61991 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:16:37.273245   61991 start.go:340] cluster config:
	{Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-131082 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:16:37.273345   61991 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:16:37.275004   61991 out.go:177] * Starting "kubernetes-upgrade-131082" primary control-plane node in "kubernetes-upgrade-131082" cluster
	I0505 22:16:37.276150   61991 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 22:16:37.276184   61991 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 22:16:37.276203   61991 cache.go:56] Caching tarball of preloaded images
	I0505 22:16:37.276272   61991 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 22:16:37.276283   61991 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 22:16:37.276387   61991 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/config.json ...
	I0505 22:16:37.276616   61991 start.go:360] acquireMachinesLock for kubernetes-upgrade-131082: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 22:17:14.609827   61991 start.go:364] duration metric: took 37.333168615s to acquireMachinesLock for "kubernetes-upgrade-131082"
	I0505 22:17:14.609879   61991 start.go:96] Skipping create...Using existing machine configuration
	I0505 22:17:14.609889   61991 fix.go:54] fixHost starting: 
	I0505 22:17:14.610282   61991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:17:14.610334   61991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:17:14.626973   61991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0505 22:17:14.627435   61991 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:17:14.627906   61991 main.go:141] libmachine: Using API Version  1
	I0505 22:17:14.627931   61991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:17:14.628215   61991 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:17:14.628416   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:17:14.628603   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetState
	I0505 22:17:14.630155   61991 fix.go:112] recreateIfNeeded on kubernetes-upgrade-131082: state=Running err=<nil>
	W0505 22:17:14.630177   61991 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 22:17:14.632393   61991 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-131082" VM ...
	I0505 22:17:14.633796   61991 machine.go:94] provisionDockerMachine start ...
	I0505 22:17:14.633823   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:17:14.634036   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:14.636765   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:14.637174   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:14.637227   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:14.637385   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:14.637583   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:14.637794   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:14.637967   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:14.638139   61991 main.go:141] libmachine: Using SSH client type: native
	I0505 22:17:14.638372   61991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:17:14.638388   61991 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 22:17:14.752759   61991 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-131082
	
	I0505 22:17:14.752792   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetMachineName
	I0505 22:17:14.753089   61991 buildroot.go:166] provisioning hostname "kubernetes-upgrade-131082"
	I0505 22:17:14.753119   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetMachineName
	I0505 22:17:14.753318   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:14.756030   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:14.756360   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:14.756393   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:14.756519   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:14.756715   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:14.756868   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:14.757021   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:14.757174   61991 main.go:141] libmachine: Using SSH client type: native
	I0505 22:17:14.757361   61991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:17:14.757381   61991 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-131082 && echo "kubernetes-upgrade-131082" | sudo tee /etc/hostname
	I0505 22:17:14.888756   61991 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-131082
	
	I0505 22:17:14.888783   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:14.891477   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:14.891843   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:14.891875   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:14.892010   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:14.892204   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:14.892390   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:14.892562   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:14.892741   61991 main.go:141] libmachine: Using SSH client type: native
	I0505 22:17:14.892911   61991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:17:14.892926   61991 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-131082' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-131082/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-131082' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 22:17:15.015395   61991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:17:15.015449   61991 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 22:17:15.015502   61991 buildroot.go:174] setting up certificates
	I0505 22:17:15.015520   61991 provision.go:84] configureAuth start
	I0505 22:17:15.015534   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetMachineName
	I0505 22:17:15.015820   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetIP
	I0505 22:17:15.018511   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.018872   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:15.018907   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.019115   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:15.021647   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.022045   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:15.022075   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.022255   61991 provision.go:143] copyHostCerts
	I0505 22:17:15.022309   61991 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 22:17:15.022344   61991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 22:17:15.022412   61991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 22:17:15.022542   61991 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 22:17:15.022555   61991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 22:17:15.022586   61991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 22:17:15.022678   61991 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 22:17:15.022688   61991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 22:17:15.022714   61991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 22:17:15.022799   61991 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-131082 san=[127.0.0.1 192.168.39.41 kubernetes-upgrade-131082 localhost minikube]
	I0505 22:17:15.184015   61991 provision.go:177] copyRemoteCerts
	I0505 22:17:15.184079   61991 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 22:17:15.184126   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:15.187974   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.188399   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:15.188439   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.188730   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:15.188945   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:15.189122   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:15.189268   61991 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:17:15.280821   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 22:17:15.314824   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 22:17:15.345243   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0505 22:17:15.374558   61991 provision.go:87] duration metric: took 359.026597ms to configureAuth
	I0505 22:17:15.374584   61991 buildroot.go:189] setting minikube options for container-runtime
	I0505 22:17:15.374780   61991 config.go:182] Loaded profile config "kubernetes-upgrade-131082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:17:15.374868   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:15.377879   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.378267   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:15.378321   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:15.378464   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:15.378676   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:15.378880   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:15.379101   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:15.379334   61991 main.go:141] libmachine: Using SSH client type: native
	I0505 22:17:15.379580   61991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:17:15.379611   61991 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 22:17:23.057399   61991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 22:17:23.057427   61991 machine.go:97] duration metric: took 8.423612297s to provisionDockerMachine
	I0505 22:17:23.057440   61991 start.go:293] postStartSetup for "kubernetes-upgrade-131082" (driver="kvm2")
	I0505 22:17:23.057473   61991 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 22:17:23.057496   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:17:23.057921   61991 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 22:17:23.057955   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:23.060951   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.061400   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:23.061434   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.061598   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:23.061885   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:23.062057   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:23.062198   61991 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:17:23.152371   61991 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 22:17:23.157526   61991 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 22:17:23.157552   61991 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 22:17:23.157609   61991 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 22:17:23.157685   61991 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 22:17:23.157790   61991 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 22:17:23.169165   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:17:23.198419   61991 start.go:296] duration metric: took 140.96506ms for postStartSetup
	I0505 22:17:23.198463   61991 fix.go:56] duration metric: took 8.588575633s for fixHost
	I0505 22:17:23.198486   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:23.201317   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.201661   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:23.201693   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.201888   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:23.202117   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:23.202287   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:23.202429   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:23.202624   61991 main.go:141] libmachine: Using SSH client type: native
	I0505 22:17:23.202841   61991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0505 22:17:23.202858   61991 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 22:17:23.321304   61991 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714947443.315382322
	
	I0505 22:17:23.321329   61991 fix.go:216] guest clock: 1714947443.315382322
	I0505 22:17:23.321339   61991 fix.go:229] Guest: 2024-05-05 22:17:23.315382322 +0000 UTC Remote: 2024-05-05 22:17:23.198468195 +0000 UTC m=+46.073030575 (delta=116.914127ms)
	I0505 22:17:23.321382   61991 fix.go:200] guest clock delta is within tolerance: 116.914127ms
	I0505 22:17:23.321394   61991 start.go:83] releasing machines lock for "kubernetes-upgrade-131082", held for 8.711538549s
	I0505 22:17:23.321427   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:17:23.321730   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetIP
	I0505 22:17:23.324812   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.325261   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:23.325296   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.325480   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:17:23.326071   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:17:23.326257   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .DriverName
	I0505 22:17:23.326354   61991 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 22:17:23.326399   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:23.326505   61991 ssh_runner.go:195] Run: cat /version.json
	I0505 22:17:23.326532   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHHostname
	I0505 22:17:23.329385   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.329647   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.329846   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:23.329926   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.330082   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:17:23.330110   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:17:23.330253   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:23.330455   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHPort
	I0505 22:17:23.330458   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:23.330724   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHKeyPath
	I0505 22:17:23.330727   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:23.330892   61991 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:17:23.331200   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetSSHUsername
	I0505 22:17:23.331391   61991 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/kubernetes-upgrade-131082/id_rsa Username:docker}
	I0505 22:17:23.548552   61991 ssh_runner.go:195] Run: systemctl --version
	I0505 22:17:23.588405   61991 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 22:17:24.070759   61991 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 22:17:24.091779   61991 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 22:17:24.091860   61991 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 22:17:24.142521   61991 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 22:17:24.142550   61991 start.go:494] detecting cgroup driver to use...
	I0505 22:17:24.142616   61991 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 22:17:24.180354   61991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 22:17:24.226431   61991 docker.go:217] disabling cri-docker service (if available) ...
	I0505 22:17:24.226491   61991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 22:17:24.384035   61991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 22:17:24.506489   61991 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 22:17:24.897281   61991 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 22:17:25.260721   61991 docker.go:233] disabling docker service ...
	I0505 22:17:25.260796   61991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 22:17:25.286712   61991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 22:17:25.305160   61991 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 22:17:25.513432   61991 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 22:17:25.803319   61991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 22:17:25.846927   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 22:17:25.878147   61991 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 22:17:25.878228   61991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:17:25.897789   61991 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 22:17:25.897877   61991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:17:25.915632   61991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:17:25.928862   61991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:17:25.943496   61991 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 22:17:25.957523   61991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:17:25.971071   61991 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:17:26.024444   61991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:17:26.068639   61991 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 22:17:26.098041   61991 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 22:17:26.131783   61991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:17:26.349064   61991 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 22:18:57.014371   61991 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.665250291s)
	I0505 22:18:57.014426   61991 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 22:18:57.014488   61991 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 22:18:57.022855   61991 start.go:562] Will wait 60s for crictl version
	I0505 22:18:57.022923   61991 ssh_runner.go:195] Run: which crictl
	I0505 22:18:57.027840   61991 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 22:18:57.075939   61991 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 22:18:57.076028   61991 ssh_runner.go:195] Run: crio --version
	I0505 22:18:57.114927   61991 ssh_runner.go:195] Run: crio --version
	I0505 22:18:57.153915   61991 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 22:18:57.155332   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) Calling .GetIP
	I0505 22:18:57.158196   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:18:57.158550   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:c6:ca", ip: ""} in network mk-kubernetes-upgrade-131082: {Iface:virbr1 ExpiryTime:2024-05-05 23:16:01 +0000 UTC Type:0 Mac:52:54:00:19:c6:ca Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:kubernetes-upgrade-131082 Clientid:01:52:54:00:19:c6:ca}
	I0505 22:18:57.158583   61991 main.go:141] libmachine: (kubernetes-upgrade-131082) DBG | domain kubernetes-upgrade-131082 has defined IP address 192.168.39.41 and MAC address 52:54:00:19:c6:ca in network mk-kubernetes-upgrade-131082
	I0505 22:18:57.158881   61991 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0505 22:18:57.163883   61991 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 22:18:57.163993   61991 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 22:18:57.164031   61991 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:18:57.221583   61991 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 22:18:57.221609   61991 crio.go:433] Images already preloaded, skipping extraction
	I0505 22:18:57.221668   61991 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:18:57.264011   61991 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 22:18:57.264036   61991 cache_images.go:84] Images are preloaded, skipping loading
	I0505 22:18:57.264051   61991 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.30.0 crio true true} ...
	I0505 22:18:57.264163   61991 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-131082 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 22:18:57.264228   61991 ssh_runner.go:195] Run: crio config
	I0505 22:18:57.320474   61991 cni.go:84] Creating CNI manager for ""
	I0505 22:18:57.320501   61991 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:18:57.320513   61991 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 22:18:57.320538   61991 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-131082 NodeName:kubernetes-upgrade-131082 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 22:18:57.320686   61991 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-131082"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 22:18:57.320746   61991 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 22:18:57.336664   61991 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 22:18:57.336756   61991 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 22:18:57.348613   61991 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0505 22:18:57.370170   61991 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 22:18:57.393534   61991 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0505 22:18:57.414366   61991 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I0505 22:18:57.419077   61991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:18:57.611685   61991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:18:57.631348   61991 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082 for IP: 192.168.39.41
	I0505 22:18:57.631377   61991 certs.go:194] generating shared ca certs ...
	I0505 22:18:57.631399   61991 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:18:57.631592   61991 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 22:18:57.631662   61991 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 22:18:57.631678   61991 certs.go:256] generating profile certs ...
	I0505 22:18:57.631790   61991 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/client.key
	I0505 22:18:57.631853   61991 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key.175bb149
	I0505 22:18:57.631918   61991 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.key
	I0505 22:18:57.632058   61991 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 22:18:57.632090   61991 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 22:18:57.632099   61991 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 22:18:57.632137   61991 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 22:18:57.632175   61991 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 22:18:57.632202   61991 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 22:18:57.632256   61991 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:18:57.632832   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 22:18:57.669766   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 22:18:57.705626   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 22:18:57.734184   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 22:18:57.763838   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0505 22:18:57.791515   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 22:18:57.819081   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 22:18:57.848333   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kubernetes-upgrade-131082/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 22:18:57.878290   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 22:18:57.908913   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 22:18:57.938385   61991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 22:18:57.968087   61991 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 22:18:57.988485   61991 ssh_runner.go:195] Run: openssl version
	I0505 22:18:57.995669   61991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 22:18:58.009614   61991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 22:18:58.017601   61991 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 22:18:58.017668   61991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 22:18:58.024611   61991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 22:18:58.035981   61991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 22:18:58.049788   61991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 22:18:58.055460   61991 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 22:18:58.055538   61991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 22:18:58.062777   61991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 22:18:58.073569   61991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 22:18:58.085838   61991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:18:58.092104   61991 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:18:58.092173   61991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:18:58.098948   61991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 22:18:58.109772   61991 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 22:18:58.115448   61991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 22:18:58.121786   61991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 22:18:58.127905   61991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 22:18:58.134395   61991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 22:18:58.141005   61991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 22:18:58.147186   61991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 22:18:58.153961   61991 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-131082 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-131082 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:18:58.154063   61991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 22:18:58.154113   61991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:18:58.197730   61991 cri.go:89] found id: "54911465ee113db5f747571a9c07fba0ed152d2b1391c60d9d07b5fd72907992"
	I0505 22:18:58.197749   61991 cri.go:89] found id: "fecfddde7c2de657a8b7e5feab435fc7d144e45be6c98625e6e66242a9973e96"
	I0505 22:18:58.197754   61991 cri.go:89] found id: "6f95b90433d9ae9086ea8f75617199cd331bd8a2ebdbfad4ca221323437ab022"
	I0505 22:18:58.197758   61991 cri.go:89] found id: "6a518b7ce18881f9cc4a6ddbfb748ee9fa1c9a82e5f2c6a1c8592af0e1b56e5d"
	I0505 22:18:58.197767   61991 cri.go:89] found id: "8327636edffc6ca54c16777e2ada193a1f1efa450299f0cd4d399217a204959b"
	I0505 22:18:58.197770   61991 cri.go:89] found id: "1d15b10964ec9ca3b9efe609fbc4ba4cb7486367bb3f8a614e0b58c299a93fdf"
	I0505 22:18:58.197773   61991 cri.go:89] found id: "580441ae6e97b0071c51fdb8003f9e9a61b23de7009208c0fce85010d254d0b7"
	I0505 22:18:58.197775   61991 cri.go:89] found id: "72bdfdbff0904cda7030f2ac3f8eb0a5242b9e4ee0402b8d86143e5768ecf501"
	I0505 22:18:58.197778   61991 cri.go:89] found id: "e9f6d0f2f5dd1b0832da4967a52fcc84c465453e128c7e2ede46b39cc09f7827"
	I0505 22:18:58.197785   61991 cri.go:89] found id: "4b2fad3748cc2a952a00208be802e8ab180e28e9a1da9762353b387741a5b45a"
	I0505 22:18:58.197787   61991 cri.go:89] found id: "fcac44db7c4c3d041d383aa7daa6acca3c3cb57bce190c518588a51b8abb331d"
	I0505 22:18:58.197790   61991 cri.go:89] found id: "acee89cd0b3a9a9edf9cfb92858669c39fea9e1d0119cac4951169a451898a10"
	I0505 22:18:58.197793   61991 cri.go:89] found id: "ad9907e32a3f3ad7600c1e43c3297f872f249038c2e02f45ebd3f5421f782609"
	I0505 22:18:58.197796   61991 cri.go:89] found id: "6b9923a46d6b72145685dbf6bd23e0f080b3c1c206be0d435284736bb09e8340"
	I0505 22:18:58.197800   61991 cri.go:89] found id: ""
	I0505 22:18:58.197846   61991 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
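The tail of the stderr log above shows minikube validating each control-plane certificate over SSH with "openssl x509 -noout -in <cert> -checkend 86400", i.e. asking whether the certificate stays valid for at least another 24 hours before it is reused. As a minimal sketch only (not minikube's actual implementation, using one certificate path taken from the log), the same check can be expressed natively in Go with crypto/x509:

// Hypothetical Go equivalent of `openssl x509 -noout -checkend 86400`:
// report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// openssl's -checkend N exits non-zero when NotAfter falls inside the next N seconds.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; adjust for a local experiment.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}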
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-131082 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-05 22:31:21.080800039 +0000 UTC m=+5646.453544166
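The "m=+5646.453544166" suffix in the failure timestamp is Go's monotonic clock reading: alongside the wall-clock time, the test binary records how many seconds have elapsed since the process started (here roughly 94 minutes into the run). A self-contained sketch of that default formatting behaviour, assuming nothing beyond the standard library:

// Sketch: time.Now() carries a monotonic reading, and time.Time's default
// String() output appends it as "m=+<seconds since process start>".
package main

import (
	"fmt"
	"time"
)

func main() {
	time.Sleep(50 * time.Millisecond)
	// Prints something like: 2024-05-05 22:31:21.08 +0000 UTC m=+0.050123456
	fmt.Println(time.Now())
}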
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-131082 -n kubernetes-upgrade-131082
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-131082 -n kubernetes-upgrade-131082: exit status 2 (264.064051ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
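The --format={{.Host}} argument is a Go text/template applied to minikube's status output, which is why the post-mortem check above prints only the host state ("Running") even though the command itself exits non-zero; the helper explicitly treats that as acceptable here. A minimal sketch of the mechanism, with a hypothetical Status struct standing in for minikube's real types:

// Illustrative only: how a --format template like {{.Host}} selects a single
// field from a status value. The Status struct here is hypothetical, not
// minikube's actual type.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		os.Exit(1)
	}
}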
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-131082 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-131082 logs -n 25: (1.446877546s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-111649                                        | pause-111649           | jenkins | v1.33.0 | 05 May 24 22:16 UTC | 05 May 24 22:16 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| unpause | -p pause-111649                                        | pause-111649           | jenkins | v1.33.0 | 05 May 24 22:16 UTC | 05 May 24 22:16 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| pause   | -p pause-111649                                        | pause-111649           | jenkins | v1.33.0 | 05 May 24 22:16 UTC | 05 May 24 22:16 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| delete  | -p pause-111649                                        | pause-111649           | jenkins | v1.33.0 | 05 May 24 22:16 UTC | 05 May 24 22:16 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| delete  | -p pause-111649                                        | pause-111649           | jenkins | v1.33.0 | 05 May 24 22:16 UTC | 05 May 24 22:16 UTC |
	| start   | -p old-k8s-version-512320                              | old-k8s-version-512320 | jenkins | v1.33.0 | 05 May 24 22:16 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| ssh     | cert-options-759256 ssh                                | cert-options-759256    | jenkins | v1.33.0 | 05 May 24 22:17 UTC | 05 May 24 22:17 UTC |
	|         | openssl x509 -text -noout -in                          |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                        |         |         |                     |                     |
	| ssh     | -p cert-options-759256 -- sudo                         | cert-options-759256    | jenkins | v1.33.0 | 05 May 24 22:17 UTC | 05 May 24 22:17 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                        |         |         |                     |                     |
	| delete  | -p cert-options-759256                                 | cert-options-759256    | jenkins | v1.33.0 | 05 May 24 22:17 UTC | 05 May 24 22:17 UTC |
	| start   | -p no-preload-112135                                   | no-preload-112135      | jenkins | v1.33.0 | 05 May 24 22:17 UTC | 05 May 24 22:19 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-112135             | no-preload-112135      | jenkins | v1.33.0 | 05 May 24 22:19 UTC | 05 May 24 22:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-112135                                   | no-preload-112135      | jenkins | v1.33.0 | 05 May 24 22:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| start   | -p cert-expiration-239335                              | cert-expiration-239335 | jenkins | v1.33.0 | 05 May 24 22:20 UTC | 05 May 24 22:20 UTC |
	|         | --memory=2048                                          |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-239335                              | cert-expiration-239335 | jenkins | v1.33.0 | 05 May 24 22:20 UTC | 05 May 24 22:20 UTC |
	| start   | -p embed-certs-778109                                  | embed-certs-778109     | jenkins | v1.33.0 | 05 May 24 22:20 UTC | 05 May 24 22:22 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-112135                  | no-preload-112135      | jenkins | v1.33.0 | 05 May 24 22:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-112135                                   | no-preload-112135      | jenkins | v1.33.0 | 05 May 24 22:21 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-512320        | old-k8s-version-512320 | jenkins | v1.33.0 | 05 May 24 22:21 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-778109            | embed-certs-778109     | jenkins | v1.33.0 | 05 May 24 22:22 UTC | 05 May 24 22:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-778109                                  | embed-certs-778109     | jenkins | v1.33.0 | 05 May 24 22:22 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-512320                              | old-k8s-version-512320 | jenkins | v1.33.0 | 05 May 24 22:23 UTC | 05 May 24 22:23 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-512320             | old-k8s-version-512320 | jenkins | v1.33.0 | 05 May 24 22:23 UTC | 05 May 24 22:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-512320                              | old-k8s-version-512320 | jenkins | v1.33.0 | 05 May 24 22:23 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-778109                 | embed-certs-778109     | jenkins | v1.33.0 | 05 May 24 22:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-778109                                  | embed-certs-778109     | jenkins | v1.33.0 | 05 May 24 22:25 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 22:25:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 22:25:19.859541   66662 out.go:291] Setting OutFile to fd 1 ...
	I0505 22:25:19.859774   66662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:25:19.859783   66662 out.go:304] Setting ErrFile to fd 2...
	I0505 22:25:19.859787   66662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:25:19.859971   66662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 22:25:19.860512   66662 out.go:298] Setting JSON to false
	I0505 22:25:19.861398   66662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7667,"bootTime":1714940253,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 22:25:19.861457   66662 start.go:139] virtualization: kvm guest
	I0505 22:25:19.863642   66662 out.go:177] * [embed-certs-778109] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 22:25:19.864924   66662 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 22:25:19.864934   66662 notify.go:220] Checking for updates...
	I0505 22:25:19.867571   66662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 22:25:19.868846   66662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:25:19.870104   66662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 22:25:19.871372   66662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 22:25:19.872551   66662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 22:25:19.873940   66662 config.go:182] Loaded profile config "embed-certs-778109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:25:19.874298   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:25:19.874344   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:25:19.889126   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0505 22:25:19.889548   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:25:19.890089   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:25:19.890110   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:25:19.890444   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:25:19.890628   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:25:19.890932   66662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 22:25:19.891295   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:25:19.891330   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:25:19.906451   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0505 22:25:19.906818   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:25:19.907369   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:25:19.907396   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:25:19.907738   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:25:19.907924   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:25:19.939663   66662 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 22:25:19.941254   66662 start.go:297] selected driver: kvm2
	I0505 22:25:19.941274   66662 start.go:901] validating driver "kvm2" against &{Name:embed-certs-778109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-778109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.90 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:25:19.941419   66662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 22:25:19.942389   66662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:25:19.942491   66662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 22:25:19.956593   66662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 22:25:19.956974   66662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 22:25:19.957033   66662 cni.go:84] Creating CNI manager for ""
	I0505 22:25:19.957046   66662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:25:19.957083   66662 start.go:340] cluster config:
	{Name:embed-certs-778109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-778109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.90 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:25:19.957185   66662 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 22:25:19.959908   66662 out.go:177] * Starting "embed-certs-778109" primary control-plane node in "embed-certs-778109" cluster
	I0505 22:25:19.961351   66662 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 22:25:19.961392   66662 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 22:25:19.961404   66662 cache.go:56] Caching tarball of preloaded images
	I0505 22:25:19.961489   66662 preload.go:173] Found /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0505 22:25:19.961503   66662 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0505 22:25:19.961617   66662 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/config.json ...
	I0505 22:25:19.961799   66662 start.go:360] acquireMachinesLock for embed-certs-778109: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 22:25:21.179771   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:24.251790   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:30.331714   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:33.403769   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:39.483759   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:42.555794   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:48.635787   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:51.707771   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:25:57.787771   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:26:00.859918   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:26:06.943832   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:26:10.011761   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:26:16.091705   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:26:19.163886   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:26:25.243782   65347 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.167:22: connect: no route to host
	I0505 22:26:28.248218   66092 start.go:364] duration metric: took 3m13.03394319s to acquireMachinesLock for "old-k8s-version-512320"
	I0505 22:26:28.248372   66092 start.go:96] Skipping create...Using existing machine configuration
	I0505 22:26:28.248391   66092 fix.go:54] fixHost starting: 
	I0505 22:26:28.248816   66092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:26:28.248853   66092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:26:28.263819   66092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0505 22:26:28.264251   66092 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:26:28.264847   66092 main.go:141] libmachine: Using API Version  1
	I0505 22:26:28.264878   66092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:26:28.265268   66092 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:26:28.265483   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	I0505 22:26:28.265656   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetState
	I0505 22:26:28.267292   66092 fix.go:112] recreateIfNeeded on old-k8s-version-512320: state=Stopped err=<nil>
	I0505 22:26:28.267315   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	W0505 22:26:28.267460   66092 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 22:26:28.269604   66092 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-512320" ...
	I0505 22:26:28.271211   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .Start
	I0505 22:26:28.271415   66092 main.go:141] libmachine: (old-k8s-version-512320) Ensuring networks are active...
	I0505 22:26:28.272123   66092 main.go:141] libmachine: (old-k8s-version-512320) Ensuring network default is active
	I0505 22:26:28.272509   66092 main.go:141] libmachine: (old-k8s-version-512320) Ensuring network mk-old-k8s-version-512320 is active
	I0505 22:26:28.272971   66092 main.go:141] libmachine: (old-k8s-version-512320) Getting domain xml...
	I0505 22:26:28.273684   66092 main.go:141] libmachine: (old-k8s-version-512320) Creating domain...
	I0505 22:26:29.488650   66092 main.go:141] libmachine: (old-k8s-version-512320) Waiting to get IP...
	I0505 22:26:29.489579   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:29.490015   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:29.490073   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:29.490002   66939 retry.go:31] will retry after 201.435914ms: waiting for machine to come up
	I0505 22:26:29.693580   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:29.694212   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:29.694242   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:29.694152   66939 retry.go:31] will retry after 354.654327ms: waiting for machine to come up
	I0505 22:26:30.050956   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:30.051645   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:30.051675   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:30.051590   66939 retry.go:31] will retry after 348.157615ms: waiting for machine to come up
	I0505 22:26:28.245600   65347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:26:28.245639   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetMachineName
	I0505 22:26:28.245960   65347 buildroot.go:166] provisioning hostname "no-preload-112135"
	I0505 22:26:28.245973   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetMachineName
	I0505 22:26:28.246186   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:26:28.248061   65347 machine.go:97] duration metric: took 4m37.378877759s to provisionDockerMachine
	I0505 22:26:28.248102   65347 fix.go:56] duration metric: took 4m37.401028739s for fixHost
	I0505 22:26:28.248113   65347 start.go:83] releasing machines lock for "no-preload-112135", held for 4m37.401058515s
	W0505 22:26:28.248138   65347 start.go:713] error starting host: provision: host is not running
	W0505 22:26:28.248230   65347 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0505 22:26:28.248239   65347 start.go:728] Will try again in 5 seconds ...
	I0505 22:26:30.401049   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:30.401621   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:30.401653   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:30.401569   66939 retry.go:31] will retry after 405.885822ms: waiting for machine to come up
	I0505 22:26:30.809036   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:30.809560   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:30.809583   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:30.809510   66939 retry.go:31] will retry after 728.867565ms: waiting for machine to come up
	I0505 22:26:31.540573   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:31.541123   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:31.541145   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:31.541101   66939 retry.go:31] will retry after 786.478155ms: waiting for machine to come up
	I0505 22:26:32.329752   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:32.330266   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:32.330297   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:32.330229   66939 retry.go:31] will retry after 730.955556ms: waiting for machine to come up
	I0505 22:26:33.063125   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:33.063628   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:33.063665   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:33.063593   66939 retry.go:31] will retry after 1.057081805s: waiting for machine to come up
	I0505 22:26:34.122007   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:34.122493   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:34.122514   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:34.122460   66939 retry.go:31] will retry after 1.731755622s: waiting for machine to come up
	I0505 22:26:33.249798   65347 start.go:360] acquireMachinesLock for no-preload-112135: {Name:mk7f653ea73351d572a4896c6b37d2e7aa44b4ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 22:26:35.856408   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:35.856911   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:35.856942   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:35.856854   66939 retry.go:31] will retry after 1.482847812s: waiting for machine to come up
	I0505 22:26:37.341852   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:37.342267   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:37.342290   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:37.342227   66939 retry.go:31] will retry after 2.867310653s: waiting for machine to come up
	I0505 22:26:40.211528   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:40.212101   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:40.212128   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:40.212049   66939 retry.go:31] will retry after 3.093623542s: waiting for machine to come up
	I0505 22:26:43.307744   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:43.308331   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | unable to find current IP address of domain old-k8s-version-512320 in network mk-old-k8s-version-512320
	I0505 22:26:43.308358   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | I0505 22:26:43.308284   66939 retry.go:31] will retry after 3.409936596s: waiting for machine to come up
	I0505 22:26:48.317174   66662 start.go:364] duration metric: took 1m28.355350192s to acquireMachinesLock for "embed-certs-778109"
	I0505 22:26:48.317241   66662 start.go:96] Skipping create...Using existing machine configuration
	I0505 22:26:48.317252   66662 fix.go:54] fixHost starting: 
	I0505 22:26:48.317691   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:26:48.317741   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:26:48.337011   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0505 22:26:48.337388   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:26:48.337814   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:26:48.337833   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:26:48.338218   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:26:48.338432   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:26:48.338594   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetState
	I0505 22:26:48.340223   66662 fix.go:112] recreateIfNeeded on embed-certs-778109: state=Stopped err=<nil>
	I0505 22:26:48.340250   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	W0505 22:26:48.340401   66662 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 22:26:48.342920   66662 out.go:177] * Restarting existing kvm2 VM for "embed-certs-778109" ...
	I0505 22:26:46.721859   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.722340   66092 main.go:141] libmachine: (old-k8s-version-512320) Found IP for machine: 192.168.50.131
	I0505 22:26:46.722377   66092 main.go:141] libmachine: (old-k8s-version-512320) Reserving static IP address...
	I0505 22:26:46.722394   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has current primary IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.722757   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "old-k8s-version-512320", mac: "52:54:00:90:4d:d4", ip: "192.168.50.131"} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:46.722787   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | skip adding static IP to network mk-old-k8s-version-512320 - found existing host DHCP lease matching {name: "old-k8s-version-512320", mac: "52:54:00:90:4d:d4", ip: "192.168.50.131"}
	I0505 22:26:46.722800   66092 main.go:141] libmachine: (old-k8s-version-512320) Reserved static IP address: 192.168.50.131
	I0505 22:26:46.722817   66092 main.go:141] libmachine: (old-k8s-version-512320) Waiting for SSH to be available...
	I0505 22:26:46.722831   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | Getting to WaitForSSH function...
	I0505 22:26:46.724848   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.725129   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:46.725159   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.725268   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | Using SSH client type: external
	I0505 22:26:46.725293   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/old-k8s-version-512320/id_rsa (-rw-------)
	I0505 22:26:46.725324   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/old-k8s-version-512320/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 22:26:46.725342   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | About to run SSH command:
	I0505 22:26:46.725353   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | exit 0
	I0505 22:26:46.851851   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | SSH cmd err, output: <nil>: 
	I0505 22:26:46.852150   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetConfigRaw
	I0505 22:26:46.852768   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetIP
	I0505 22:26:46.855325   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.855670   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:46.855697   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.855946   66092 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/config.json ...
	I0505 22:26:46.856140   66092 machine.go:94] provisionDockerMachine start ...
	I0505 22:26:46.856156   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	I0505 22:26:46.856383   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:46.858288   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.858659   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:46.858689   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.858833   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:46.858976   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:46.859123   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:46.859262   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:46.859410   66092 main.go:141] libmachine: Using SSH client type: native
	I0505 22:26:46.859627   66092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.131 22 <nil> <nil>}
	I0505 22:26:46.859641   66092 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 22:26:46.972960   66092 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 22:26:46.973011   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetMachineName
	I0505 22:26:46.973277   66092 buildroot.go:166] provisioning hostname "old-k8s-version-512320"
	I0505 22:26:46.973302   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetMachineName
	I0505 22:26:46.973513   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:46.976207   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.976565   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:46.976607   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:46.976742   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:46.976909   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:46.977056   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:46.977159   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:46.977306   66092 main.go:141] libmachine: Using SSH client type: native
	I0505 22:26:46.977504   66092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.131 22 <nil> <nil>}
	I0505 22:26:46.977523   66092 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-512320 && echo "old-k8s-version-512320" | sudo tee /etc/hostname
	I0505 22:26:47.104382   66092 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-512320
	
	I0505 22:26:47.104415   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:47.107455   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.107819   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:47.107864   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.108025   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:47.108251   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:47.108431   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:47.108616   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:47.108825   66092 main.go:141] libmachine: Using SSH client type: native
	I0505 22:26:47.108976   66092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.131 22 <nil> <nil>}
	I0505 22:26:47.108992   66092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-512320' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-512320/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-512320' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 22:26:47.230920   66092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:26:47.230975   66092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 22:26:47.231032   66092 buildroot.go:174] setting up certificates
	I0505 22:26:47.231043   66092 provision.go:84] configureAuth start
	I0505 22:26:47.231056   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetMachineName
	I0505 22:26:47.231373   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetIP
	I0505 22:26:47.234418   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.234781   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:47.234829   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.235012   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:47.237014   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.237296   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:47.237385   66092 provision.go:143] copyHostCerts
	I0505 22:26:47.237396   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.237450   66092 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 22:26:47.237467   66092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 22:26:47.237533   66092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 22:26:47.237635   66092 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 22:26:47.237643   66092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 22:26:47.237678   66092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 22:26:47.237773   66092 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 22:26:47.237782   66092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 22:26:47.237813   66092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 22:26:47.237885   66092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-512320 san=[127.0.0.1 192.168.50.131 localhost minikube old-k8s-version-512320]
	I0505 22:26:47.610414   66092 provision.go:177] copyRemoteCerts
	I0505 22:26:47.610478   66092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 22:26:47.610505   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:47.613107   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.613391   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:47.613419   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.613593   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:47.613790   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:47.613924   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:47.614030   66092 sshutil.go:53] new ssh client: &{IP:192.168.50.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/old-k8s-version-512320/id_rsa Username:docker}
	I0505 22:26:47.704283   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 22:26:47.731571   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0505 22:26:47.758141   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 22:26:47.783365   66092 provision.go:87] duration metric: took 552.312462ms to configureAuth
	I0505 22:26:47.783389   66092 buildroot.go:189] setting minikube options for container-runtime
	I0505 22:26:47.783594   66092 config.go:182] Loaded profile config "old-k8s-version-512320": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0505 22:26:47.783668   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:47.786136   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.786522   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:47.786551   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:47.786730   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:47.786932   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:47.787095   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:47.787234   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:47.787372   66092 main.go:141] libmachine: Using SSH client type: native
	I0505 22:26:47.787555   66092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.131 22 <nil> <nil>}
	I0505 22:26:47.787572   66092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 22:26:48.066745   66092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 22:26:48.066771   66092 machine.go:97] duration metric: took 1.21061884s to provisionDockerMachine
	I0505 22:26:48.066785   66092 start.go:293] postStartSetup for "old-k8s-version-512320" (driver="kvm2")
	I0505 22:26:48.066799   66092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 22:26:48.066819   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	I0505 22:26:48.067233   66092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 22:26:48.067260   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:48.069877   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.070222   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:48.070242   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.070448   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:48.070625   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:48.070758   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:48.070887   66092 sshutil.go:53] new ssh client: &{IP:192.168.50.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/old-k8s-version-512320/id_rsa Username:docker}
	I0505 22:26:48.160391   66092 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 22:26:48.165414   66092 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 22:26:48.165439   66092 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 22:26:48.165503   66092 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 22:26:48.165593   66092 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 22:26:48.165683   66092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 22:26:48.176921   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:26:48.202881   66092 start.go:296] duration metric: took 136.082091ms for postStartSetup
	I0505 22:26:48.202918   66092 fix.go:56] duration metric: took 19.954534903s for fixHost
	I0505 22:26:48.202938   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:48.205686   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.206004   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:48.206029   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.206204   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:48.206432   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:48.206600   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:48.206726   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:48.206865   66092 main.go:141] libmachine: Using SSH client type: native
	I0505 22:26:48.207076   66092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.131 22 <nil> <nil>}
	I0505 22:26:48.207091   66092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 22:26:48.317029   66092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714948008.285457780
	
	I0505 22:26:48.317053   66092 fix.go:216] guest clock: 1714948008.285457780
	I0505 22:26:48.317062   66092 fix.go:229] Guest: 2024-05-05 22:26:48.28545778 +0000 UTC Remote: 2024-05-05 22:26:48.202921989 +0000 UTC m=+213.145956521 (delta=82.535791ms)
	I0505 22:26:48.317096   66092 fix.go:200] guest clock delta is within tolerance: 82.535791ms
	I0505 22:26:48.317100   66092 start.go:83] releasing machines lock for "old-k8s-version-512320", held for 20.068754317s
	I0505 22:26:48.317129   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	I0505 22:26:48.317416   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetIP
	I0505 22:26:48.320059   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.320429   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:48.320458   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.320613   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	I0505 22:26:48.321086   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	I0505 22:26:48.321255   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .DriverName
	I0505 22:26:48.321317   66092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 22:26:48.321370   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:48.321479   66092 ssh_runner.go:195] Run: cat /version.json
	I0505 22:26:48.321509   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHHostname
	I0505 22:26:48.323831   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.324149   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:48.324176   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.324204   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.324359   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:48.324548   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:48.324551   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:48.324576   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:48.324737   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHPort
	I0505 22:26:48.324746   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:48.324935   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHKeyPath
	I0505 22:26:48.324932   66092 sshutil.go:53] new ssh client: &{IP:192.168.50.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/old-k8s-version-512320/id_rsa Username:docker}
	I0505 22:26:48.325073   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetSSHUsername
	I0505 22:26:48.325212   66092 sshutil.go:53] new ssh client: &{IP:192.168.50.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/old-k8s-version-512320/id_rsa Username:docker}
	I0505 22:26:48.409070   66092 ssh_runner.go:195] Run: systemctl --version
	I0505 22:26:48.435670   66092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 22:26:48.585352   66092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 22:26:48.592237   66092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 22:26:48.592356   66092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 22:26:48.610621   66092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 22:26:48.610647   66092 start.go:494] detecting cgroup driver to use...
	I0505 22:26:48.610716   66092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 22:26:48.631097   66092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 22:26:48.647541   66092 docker.go:217] disabling cri-docker service (if available) ...
	I0505 22:26:48.647625   66092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 22:26:48.664600   66092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 22:26:48.680369   66092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 22:26:48.797538   66092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 22:26:48.944748   66092 docker.go:233] disabling docker service ...
	I0505 22:26:48.944820   66092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 22:26:48.966910   66092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 22:26:48.984803   66092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 22:26:49.135212   66092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 22:26:49.255334   66092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 22:26:49.272352   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 22:26:49.297528   66092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0505 22:26:49.297602   66092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:26:49.310767   66092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 22:26:49.310838   66092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:26:49.323532   66092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:26:49.335542   66092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:26:49.347146   66092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 22:26:49.359165   66092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 22:26:49.369614   66092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 22:26:49.369676   66092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 22:26:49.386199   66092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 22:26:49.398535   66092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:26:49.520166   66092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 22:26:49.675317   66092 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 22:26:49.675417   66092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 22:26:49.680696   66092 start.go:562] Will wait 60s for crictl version
	I0505 22:26:49.680752   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:49.685137   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 22:26:49.729702   66092 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 22:26:49.729809   66092 ssh_runner.go:195] Run: crio --version
	I0505 22:26:49.764070   66092 ssh_runner.go:195] Run: crio --version
	I0505 22:26:49.801349   66092 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0505 22:26:48.344412   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Start
	I0505 22:26:48.344625   66662 main.go:141] libmachine: (embed-certs-778109) Ensuring networks are active...
	I0505 22:26:48.345315   66662 main.go:141] libmachine: (embed-certs-778109) Ensuring network default is active
	I0505 22:26:48.345731   66662 main.go:141] libmachine: (embed-certs-778109) Ensuring network mk-embed-certs-778109 is active
	I0505 22:26:48.346082   66662 main.go:141] libmachine: (embed-certs-778109) Getting domain xml...
	I0505 22:26:48.346730   66662 main.go:141] libmachine: (embed-certs-778109) Creating domain...
	I0505 22:26:49.584271   66662 main.go:141] libmachine: (embed-certs-778109) Waiting to get IP...
	I0505 22:26:49.585267   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:49.585733   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:49.585787   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:49.585687   67071 retry.go:31] will retry after 209.276645ms: waiting for machine to come up
	I0505 22:26:49.796416   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:49.797181   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:49.797213   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:49.797098   67071 retry.go:31] will retry after 366.530743ms: waiting for machine to come up
	I0505 22:26:49.803058   66092 main.go:141] libmachine: (old-k8s-version-512320) Calling .GetIP
	I0505 22:26:49.806022   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:49.806447   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:4d:d4", ip: ""} in network mk-old-k8s-version-512320: {Iface:virbr2 ExpiryTime:2024-05-05 23:17:39 +0000 UTC Type:0 Mac:52:54:00:90:4d:d4 Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:old-k8s-version-512320 Clientid:01:52:54:00:90:4d:d4}
	I0505 22:26:49.806479   66092 main.go:141] libmachine: (old-k8s-version-512320) DBG | domain old-k8s-version-512320 has defined IP address 192.168.50.131 and MAC address 52:54:00:90:4d:d4 in network mk-old-k8s-version-512320
	I0505 22:26:49.806637   66092 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0505 22:26:49.811656   66092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:26:49.826728   66092 kubeadm.go:877] updating cluster {Name:old-k8s-version-512320 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-512320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.131 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 22:26:49.826878   66092 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0505 22:26:49.826950   66092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:26:49.880058   66092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0505 22:26:49.880146   66092 ssh_runner.go:195] Run: which lz4
	I0505 22:26:49.884888   66092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0505 22:26:49.889925   66092 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 22:26:49.889957   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0505 22:26:50.165840   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:50.166322   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:50.166353   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:50.166279   67071 retry.go:31] will retry after 478.134327ms: waiting for machine to come up
	I0505 22:26:50.645856   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:50.646479   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:50.646513   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:50.646431   67071 retry.go:31] will retry after 452.250893ms: waiting for machine to come up
	I0505 22:26:51.100054   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:51.100619   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:51.100650   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:51.100581   67071 retry.go:31] will retry after 735.162766ms: waiting for machine to come up
	I0505 22:26:51.837021   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:51.837678   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:51.837708   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:51.837643   67071 retry.go:31] will retry after 624.544626ms: waiting for machine to come up
	I0505 22:26:52.463788   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:52.464314   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:52.464339   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:52.464267   67071 retry.go:31] will retry after 887.121079ms: waiting for machine to come up
	I0505 22:26:53.352829   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:53.353422   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:53.353453   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:53.353379   67071 retry.go:31] will retry after 1.289476757s: waiting for machine to come up
	I0505 22:26:54.644765   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:54.645216   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:54.645241   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:54.645185   67071 retry.go:31] will retry after 1.301023935s: waiting for machine to come up
	I0505 22:26:51.998719   66092 crio.go:462] duration metric: took 2.113857569s to copy over tarball
	I0505 22:26:51.998788   66092 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 22:26:55.947755   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:55.948265   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:55.948298   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:55.948231   67071 retry.go:31] will retry after 1.534776396s: waiting for machine to come up
	I0505 22:26:57.484378   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:26:57.484922   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:26:57.484949   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:26:57.484869   67071 retry.go:31] will retry after 2.747273929s: waiting for machine to come up
	I0505 22:26:55.353343   66092 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.354519742s)
	I0505 22:26:55.353376   66092 crio.go:469] duration metric: took 3.35462909s to extract the tarball
	I0505 22:26:55.353383   66092 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 22:26:55.400033   66092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:26:55.441178   66092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0505 22:26:55.441211   66092 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0505 22:26:55.441269   66092 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:26:55.441286   66092 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:26:55.441310   66092 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:26:55.441346   66092 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:26:55.441485   66092 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0505 22:26:55.441526   66092 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0505 22:26:55.441532   66092 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0505 22:26:55.441626   66092 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:26:55.443315   66092 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:26:55.443369   66092 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0505 22:26:55.443423   66092 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:26:55.443315   66092 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0505 22:26:55.443598   66092 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0505 22:26:55.443648   66092 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:26:55.443704   66092 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:26:55.443864   66092 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:26:55.621197   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0505 22:26:55.654692   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:26:55.669094   66092 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0505 22:26:55.669142   66092 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0505 22:26:55.669213   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:55.708060   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0505 22:26:55.708193   66092 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0505 22:26:55.708220   66092 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:26:55.708247   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:55.741805   66092 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0505 22:26:55.741849   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0505 22:26:55.777495   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0505 22:26:55.781464   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:26:55.785086   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0505 22:26:55.787692   66092 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0505 22:26:55.798441   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:26:55.805516   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:26:55.887212   66092 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0505 22:26:55.887258   66092 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0505 22:26:55.887306   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:55.890766   66092 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0505 22:26:55.890800   66092 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:26:55.890842   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:55.923355   66092 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0505 22:26:55.923404   66092 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0505 22:26:55.923450   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:55.929780   66092 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0505 22:26:55.929798   66092 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0505 22:26:55.929825   66092 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:26:55.929825   66092 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:26:55.929869   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:55.929874   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0505 22:26:55.929879   66092 ssh_runner.go:195] Run: which crictl
	I0505 22:26:55.929907   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0505 22:26:55.932733   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0505 22:26:56.018932   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0505 22:26:56.018986   66092 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0505 22:26:56.019016   66092 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0505 22:26:56.019038   66092 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0505 22:26:56.025832   66092 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0505 22:26:56.069981   66092 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0505 22:26:56.077568   66092 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0505 22:26:56.422441   66092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:26:56.579412   66092 cache_images.go:92] duration metric: took 1.138181824s to LoadCachedImages
	W0505 22:26:56.579520   66092 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0505 22:26:56.579542   66092 kubeadm.go:928] updating node { 192.168.50.131 8443 v1.20.0 crio true true} ...
	I0505 22:26:56.579670   66092 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-512320 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-512320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 22:26:56.579750   66092 ssh_runner.go:195] Run: crio config
	I0505 22:26:56.642315   66092 cni.go:84] Creating CNI manager for ""
	I0505 22:26:56.642347   66092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:26:56.642363   66092 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 22:26:56.642389   66092 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.131 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-512320 NodeName:old-k8s-version-512320 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.131"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.131 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0505 22:26:56.642567   66092 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.131
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-512320"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.131
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.131"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 22:26:56.642648   66092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0505 22:26:56.654183   66092 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 22:26:56.654248   66092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 22:26:56.664610   66092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0505 22:26:56.686653   66092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 22:26:56.706414   66092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0505 22:26:56.726166   66092 ssh_runner.go:195] Run: grep 192.168.50.131	control-plane.minikube.internal$ /etc/hosts
	I0505 22:26:56.730446   66092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.131	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:26:56.743644   66092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:26:56.878372   66092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:26:56.898147   66092 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320 for IP: 192.168.50.131
	I0505 22:26:56.898171   66092 certs.go:194] generating shared ca certs ...
	I0505 22:26:56.898186   66092 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:26:56.898376   66092 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 22:26:56.898431   66092 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 22:26:56.898450   66092 certs.go:256] generating profile certs ...
	I0505 22:26:56.898566   66092 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/client.key
	I0505 22:26:56.898637   66092 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/apiserver.key.37e26fff
	I0505 22:26:56.898686   66092 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/proxy-client.key
	I0505 22:26:56.898840   66092 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 22:26:56.898896   66092 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 22:26:56.898906   66092 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 22:26:56.898935   66092 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 22:26:56.898972   66092 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 22:26:56.899005   66092 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 22:26:56.899057   66092 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:26:56.899897   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 22:26:56.939171   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 22:26:56.972142   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 22:26:57.010994   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 22:26:57.048473   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 22:26:57.092488   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 22:26:57.128416   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 22:26:57.164422   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 22:26:57.192304   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 22:26:57.218100   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 22:26:57.244793   66092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 22:26:57.271181   66092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 22:26:57.289098   66092 ssh_runner.go:195] Run: openssl version
	I0505 22:26:57.295194   66092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 22:26:57.307240   66092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:26:57.312533   66092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:26:57.312582   66092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:26:57.318756   66092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 22:26:57.330884   66092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 22:26:57.342161   66092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 22:26:57.346992   66092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 22:26:57.347068   66092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 22:26:57.353809   66092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 22:26:57.366265   66092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 22:26:57.379456   66092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 22:26:57.384470   66092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 22:26:57.384525   66092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 22:26:57.390470   66092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
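Each CA dropped into /usr/share/ca-certificates above is also registered in the OpenSSL trust directory by symlinking it as /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash -noout". A minimal Go sketch of that pairing (a hypothetical helper that shells out to openssl the same way the logged commands do; it is not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject-name hash of a PEM certificate and
// symlinks it into /etc/ssl/certs/<hash>.0, emulating "ln -fs".
func linkCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // force, like "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}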
	I0505 22:26:57.402677   66092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 22:26:57.407988   66092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 22:26:57.414980   66092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 22:26:57.421946   66092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 22:26:57.429118   66092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 22:26:57.438047   66092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 22:26:57.446322   66092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
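The -checkend 86400 runs above make openssl exit non-zero if a certificate will expire within the next 24 hours (86400 seconds), which is how stale control-plane certs get flagged before being reused. The same check in Go with crypto/x509 (a standalone sketch, not the code that produced these logs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question "openssl x509 -checkend" answers via its exit code.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}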
	I0505 22:26:57.452904   66092 kubeadm.go:391] StartCluster: {Name:old-k8s-version-512320 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-512320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.131 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:26:57.453005   66092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 22:26:57.453081   66092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:26:57.502695   66092 cri.go:89] found id: ""
	I0505 22:26:57.502837   66092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 22:26:57.514494   66092 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 22:26:57.514517   66092 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 22:26:57.514524   66092 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 22:26:57.514574   66092 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 22:26:57.525883   66092 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 22:26:57.527334   66092 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-512320" does not appear in /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:26:57.528384   66092 kubeconfig.go:62] /home/jenkins/minikube-integration/18602-11466/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-512320" cluster setting kubeconfig missing "old-k8s-version-512320" context setting]
	I0505 22:26:57.529685   66092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:26:57.531337   66092 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 22:26:57.542274   66092 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.131
	I0505 22:26:57.542304   66092 kubeadm.go:1154] stopping kube-system containers ...
	I0505 22:26:57.542317   66092 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0505 22:26:57.542369   66092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:26:57.585085   66092 cri.go:89] found id: ""
	I0505 22:26:57.585168   66092 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0505 22:26:57.603702   66092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:26:57.614905   66092 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:26:57.614929   66092 kubeadm.go:156] found existing configuration files:
	
	I0505 22:26:57.614980   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:26:57.625442   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:26:57.625510   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:26:57.636367   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:26:57.646933   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:26:57.646983   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:26:57.659198   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:26:57.674905   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:26:57.674979   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:26:57.685908   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:26:57.696131   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:26:57.696187   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:26:57.706872   66092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
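The block above is the stale-config cleanup for a cluster restart: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so the kubeadm phases that follow can regenerate it, and the fresh kubeadm.yaml is then moved into place. A compact Go sketch of that check-and-remove loop (hypothetical, mirroring the logged grep/rm commands):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it (mirrors "sudo rm -f")
			// so "kubeadm init phase kubeconfig all" writes a fresh one.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}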
	I0505 22:26:57.717790   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:26:57.853321   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:26:58.296225   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:26:58.556609   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:26:58.664141   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:26:58.771655   66092 api_server.go:52] waiting for apiserver process to appear ...
	I0505 22:26:58.771758   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:26:59.272689   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:26:59.772104   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:00.235234   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:00.235820   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:27:00.235850   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:27:00.235746   67071 retry.go:31] will retry after 2.479330933s: waiting for machine to come up
	I0505 22:27:02.718443   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:02.719037   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:27:02.719084   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:27:02.719002   67071 retry.go:31] will retry after 3.364174296s: waiting for machine to come up
	I0505 22:27:00.272777   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:00.772472   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:01.272591   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:01.771992   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:02.272724   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:02.772725   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:03.272363   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:03.771830   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:04.272703   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:04.772099   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:06.086531   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:06.086983   66662 main.go:141] libmachine: (embed-certs-778109) DBG | unable to find current IP address of domain embed-certs-778109 in network mk-embed-certs-778109
	I0505 22:27:06.087017   66662 main.go:141] libmachine: (embed-certs-778109) DBG | I0505 22:27:06.086941   67071 retry.go:31] will retry after 5.620275667s: waiting for machine to come up
	I0505 22:27:05.272665   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:05.772288   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:06.272310   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:06.772706   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:07.271869   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:07.772713   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:08.272713   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:08.772078   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:09.272485   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:09.772486   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
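After the kubeadm phases complete, the restart path simply polls for a kube-apiserver process about every 500ms, which is what the repeated pgrep runs above are. A minimal Go version of such a wait loop (a sketch only; the 4-minute timeout is an assumption, not a value taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process for this
// minikube profile shows up, or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}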
	I0505 22:27:09.671690   61991 kubeadm.go:309] [api-check] The API server is not healthy after 4m0.002039236s
	I0505 22:27:09.671741   61991 kubeadm.go:309] 
	I0505 22:27:09.671785   61991 kubeadm.go:309] Unfortunately, an error has occurred:
	I0505 22:27:09.671854   61991 kubeadm.go:309] 	context deadline exceeded
	I0505 22:27:09.671881   61991 kubeadm.go:309] 
	I0505 22:27:09.671910   61991 kubeadm.go:309] This error is likely caused by:
	I0505 22:27:09.671944   61991 kubeadm.go:309] 	- The kubelet is not running
	I0505 22:27:09.672037   61991 kubeadm.go:309] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0505 22:27:09.672046   61991 kubeadm.go:309] 
	I0505 22:27:09.672152   61991 kubeadm.go:309] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0505 22:27:09.672208   61991 kubeadm.go:309] 	- 'systemctl status kubelet'
	I0505 22:27:09.672285   61991 kubeadm.go:309] 	- 'journalctl -xeu kubelet'
	I0505 22:27:09.672317   61991 kubeadm.go:309] 
	I0505 22:27:09.672466   61991 kubeadm.go:309] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0505 22:27:09.672584   61991 kubeadm.go:309] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0505 22:27:09.672712   61991 kubeadm.go:309] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0505 22:27:09.672848   61991 kubeadm.go:309] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0505 22:27:09.672957   61991 kubeadm.go:309] 	Once you have found the failing container, you can inspect its logs with:
	I0505 22:27:09.673070   61991 kubeadm.go:309] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0505 22:27:09.673384   61991 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 22:27:09.673482   61991 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0505 22:27:09.673581   61991 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0505 22:27:09.673713   61991 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.079098ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.002039236s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0505 22:27:09.673754   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
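The failure above is kubeadm's api-check timing out: the kubelet reported healthy, but no healthy API server answered within 4m0s, so minikube runs kubeadm reset and will retry. Beyond the crictl commands suggested in the output, one quick way to probe the control-plane endpoint directly is an HTTPS GET against the API server's health path (a hypothetical diagnostic sketch; the /livez path and the use of InsecureSkipVerify are assumptions for a throwaway check against a test VM, not part of the logged run):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Certificate verification is skipped only because this is a one-off
	// probe against a local test VM; never do this for real traffic.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://control-plane.minikube.internal:8443/livez")
	if err != nil {
		fmt.Println("API server not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}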
	I0505 22:27:13.233479   65347 start.go:364] duration metric: took 39.983600895s to acquireMachinesLock for "no-preload-112135"
	I0505 22:27:13.233566   65347 start.go:96] Skipping create...Using existing machine configuration
	I0505 22:27:13.233578   65347 fix.go:54] fixHost starting: 
	I0505 22:27:13.233990   65347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:13.234025   65347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:13.250988   65347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I0505 22:27:13.251518   65347 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:13.252041   65347 main.go:141] libmachine: Using API Version  1
	I0505 22:27:13.252062   65347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:13.252439   65347 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:13.252637   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	I0505 22:27:13.252793   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetState
	I0505 22:27:13.254501   65347 fix.go:112] recreateIfNeeded on no-preload-112135: state=Stopped err=<nil>
	I0505 22:27:13.254529   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	W0505 22:27:13.254682   65347 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 22:27:13.256525   65347 out.go:177] * Restarting existing kvm2 VM for "no-preload-112135" ...
	I0505 22:27:11.711176   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.711682   66662 main.go:141] libmachine: (embed-certs-778109) Found IP for machine: 192.168.72.90
	I0505 22:27:11.711702   66662 main.go:141] libmachine: (embed-certs-778109) Reserving static IP address...
	I0505 22:27:11.711717   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has current primary IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.712109   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "embed-certs-778109", mac: "52:54:00:b4:76:d0", ip: "192.168.72.90"} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:11.712139   66662 main.go:141] libmachine: (embed-certs-778109) DBG | skip adding static IP to network mk-embed-certs-778109 - found existing host DHCP lease matching {name: "embed-certs-778109", mac: "52:54:00:b4:76:d0", ip: "192.168.72.90"}
	I0505 22:27:11.712152   66662 main.go:141] libmachine: (embed-certs-778109) Reserved static IP address: 192.168.72.90
	I0505 22:27:11.712182   66662 main.go:141] libmachine: (embed-certs-778109) Waiting for SSH to be available...
	I0505 22:27:11.712192   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Getting to WaitForSSH function...
	I0505 22:27:11.714002   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.714324   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:11.714353   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.714567   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Using SSH client type: external
	I0505 22:27:11.714590   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa (-rw-------)
	I0505 22:27:11.714626   66662 main.go:141] libmachine: (embed-certs-778109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 22:27:11.714644   66662 main.go:141] libmachine: (embed-certs-778109) DBG | About to run SSH command:
	I0505 22:27:11.714675   66662 main.go:141] libmachine: (embed-certs-778109) DBG | exit 0
	I0505 22:27:11.840586   66662 main.go:141] libmachine: (embed-certs-778109) DBG | SSH cmd err, output: <nil>: 
	I0505 22:27:11.840920   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetConfigRaw
	I0505 22:27:11.841631   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetIP
	I0505 22:27:11.844377   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.844770   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:11.844800   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.845080   66662 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/config.json ...
	I0505 22:27:11.845270   66662 machine.go:94] provisionDockerMachine start ...
	I0505 22:27:11.845289   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:11.845510   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:11.847918   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.848287   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:11.848328   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.848479   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:11.848663   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:11.848841   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:11.848989   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:11.849159   66662 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:11.849407   66662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.90 22 <nil> <nil>}
	I0505 22:27:11.849423   66662 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 22:27:11.969666   66662 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 22:27:11.969704   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetMachineName
	I0505 22:27:11.969984   66662 buildroot.go:166] provisioning hostname "embed-certs-778109"
	I0505 22:27:11.970009   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetMachineName
	I0505 22:27:11.970183   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:11.972890   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.973209   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:11.973237   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:11.973450   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:11.973655   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:11.973828   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:11.973985   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:11.974163   66662 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:11.974378   66662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.90 22 <nil> <nil>}
	I0505 22:27:11.974397   66662 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-778109 && echo "embed-certs-778109" | sudo tee /etc/hostname
	I0505 22:27:12.106784   66662 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-778109
	
	I0505 22:27:12.106825   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:12.109651   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.110005   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:12.110054   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.110247   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:12.110462   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:12.110636   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:12.110778   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:12.110964   66662 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:12.111124   66662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.90 22 <nil> <nil>}
	I0505 22:27:12.111141   66662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-778109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-778109/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-778109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 22:27:12.243767   66662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:27:12.243808   66662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 22:27:12.243888   66662 buildroot.go:174] setting up certificates
	I0505 22:27:12.243903   66662 provision.go:84] configureAuth start
	I0505 22:27:12.243919   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetMachineName
	I0505 22:27:12.244235   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetIP
	I0505 22:27:12.247016   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.247504   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:12.247540   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.247761   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:12.250123   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.250533   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:12.250579   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.250688   66662 provision.go:143] copyHostCerts
	I0505 22:27:12.250753   66662 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 22:27:12.250776   66662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 22:27:12.250838   66662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 22:27:12.250932   66662 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 22:27:12.250941   66662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 22:27:12.250967   66662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 22:27:12.251014   66662 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 22:27:12.251022   66662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 22:27:12.251041   66662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 22:27:12.251083   66662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.embed-certs-778109 san=[127.0.0.1 192.168.72.90 embed-certs-778109 localhost minikube]
	I0505 22:27:12.470265   66662 provision.go:177] copyRemoteCerts
	I0505 22:27:12.470323   66662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 22:27:12.470350   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:12.473328   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.473678   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:12.473729   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.473883   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:12.474075   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:12.474253   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:12.474402   66662 sshutil.go:53] new ssh client: &{IP:192.168.72.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa Username:docker}
	I0505 22:27:12.564447   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0505 22:27:12.600021   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 22:27:12.626680   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 22:27:12.656430   66662 provision.go:87] duration metric: took 412.513627ms to configureAuth
	I0505 22:27:12.656463   66662 buildroot.go:189] setting minikube options for container-runtime
	I0505 22:27:12.656683   66662 config.go:182] Loaded profile config "embed-certs-778109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:27:12.656753   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:12.659752   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.660137   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:12.660157   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.660383   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:12.660619   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:12.660768   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:12.660885   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:12.661073   66662 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:12.661242   66662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.90 22 <nil> <nil>}
	I0505 22:27:12.661257   66662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 22:27:12.964315   66662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0505 22:27:12.964350   66662 machine.go:97] duration metric: took 1.119066932s to provisionDockerMachine
	I0505 22:27:12.964364   66662 start.go:293] postStartSetup for "embed-certs-778109" (driver="kvm2")
	I0505 22:27:12.964377   66662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 22:27:12.964398   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:12.964733   66662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 22:27:12.964765   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:12.967693   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.968065   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:12.968103   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:12.968374   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:12.968586   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:12.968766   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:12.968926   66662 sshutil.go:53] new ssh client: &{IP:192.168.72.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa Username:docker}
	I0505 22:27:13.058294   66662 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 22:27:13.064682   66662 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 22:27:13.064704   66662 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 22:27:13.064760   66662 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 22:27:13.064845   66662 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 22:27:13.064946   66662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 22:27:13.076981   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:27:13.111221   66662 start.go:296] duration metric: took 146.841088ms for postStartSetup
	I0505 22:27:13.111272   66662 fix.go:56] duration metric: took 24.794018529s for fixHost
	I0505 22:27:13.111298   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:13.114394   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.114766   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:13.114809   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.115005   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:13.115217   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:13.115393   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:13.115547   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:13.115811   66662 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:13.115999   66662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.90 22 <nil> <nil>}
	I0505 22:27:13.116011   66662 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 22:27:13.233320   66662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714948033.219526007
	
	I0505 22:27:13.233349   66662 fix.go:216] guest clock: 1714948033.219526007
	I0505 22:27:13.233359   66662 fix.go:229] Guest: 2024-05-05 22:27:13.219526007 +0000 UTC Remote: 2024-05-05 22:27:13.111277038 +0000 UTC m=+113.299401567 (delta=108.248969ms)
	I0505 22:27:13.233380   66662 fix.go:200] guest clock delta is within tolerance: 108.248969ms
	I0505 22:27:13.233384   66662 start.go:83] releasing machines lock for "embed-certs-778109", held for 24.916183126s
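The fix.go lines above compare the VM clock against the host: the command logged as date +%!s(MISSING).%!N(MISSING) is date +%s.%N with its format verbs eaten by the logger, and the resulting ~108ms delta is accepted as within tolerance. A small Go sketch of that comparison (hypothetical; the 2-second tolerance here is an assumption, not the value minikube uses):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far
// the guest clock is from the supplied host time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Now()
	delta, err := clockDelta("1714948033.219526007", host)
	if err != nil {
		fmt.Println(err)
		return
	}
	tolerance := 2 * time.Second // assumed tolerance
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock drift %s exceeds %s\n", delta, tolerance)
	}
}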
	I0505 22:27:13.233411   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:13.233728   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetIP
	I0505 22:27:13.236668   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.237029   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:13.237056   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.237238   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:13.237896   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:13.238090   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:13.238216   66662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 22:27:13.238262   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:13.238324   66662 ssh_runner.go:195] Run: cat /version.json
	I0505 22:27:13.238356   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:13.240934   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.241177   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.241354   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:13.241384   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.241484   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:13.241601   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:13.241618   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:13.241647   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:13.241799   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:13.242069   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:13.242070   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:13.242225   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:13.242242   66662 sshutil.go:53] new ssh client: &{IP:192.168.72.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa Username:docker}
	I0505 22:27:13.242391   66662 sshutil.go:53] new ssh client: &{IP:192.168.72.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa Username:docker}
	I0505 22:27:13.352748   66662 ssh_runner.go:195] Run: systemctl --version
	I0505 22:27:13.359616   66662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 22:27:13.514515   66662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 22:27:13.521554   66662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 22:27:13.521625   66662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 22:27:13.539349   66662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 22:27:13.539378   66662 start.go:494] detecting cgroup driver to use...
	I0505 22:27:13.539444   66662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 22:27:13.561278   66662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 22:27:13.583907   66662 docker.go:217] disabling cri-docker service (if available) ...
	I0505 22:27:13.583976   66662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 22:27:13.602729   66662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 22:27:13.621975   66662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 22:27:13.793302   66662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 22:27:13.960249   66662 docker.go:233] disabling docker service ...
	I0505 22:27:13.960319   66662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 22:27:13.982223   66662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 22:27:14.003069   66662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 22:27:14.182247   66662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 22:27:14.370457   66662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 22:27:14.387727   66662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 22:27:14.409663   66662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 22:27:14.409713   66662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:14.424111   66662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 22:27:14.424199   66662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:14.441814   66662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:14.456427   66662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:14.470835   66662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 22:27:14.485679   66662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:14.499759   66662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:14.523572   66662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
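The tee and sed commands above (22:27:14.387 through 22:27:14.523) point crictl at the CRI-O socket and rewrite /etc/crio/crio.conf.d/02-crio.conf for the cgroupfs driver. Assuming an otherwise default drop-in, the affected settings should end up roughly like this (reconstructed from the commands, not captured from the VM):

# /etc/crictl.yaml (written by the tee above)
runtime-endpoint: unix:///var/run/crio/crio.sock

# /etc/crio/crio.conf.d/02-crio.conf (after the sed edits)
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]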
	I0505 22:27:14.537398   66662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 22:27:14.550524   66662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 22:27:14.550605   66662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 22:27:14.566922   66662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 22:27:14.584290   66662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:27:14.742091   66662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 22:27:14.902812   66662 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 22:27:14.902896   66662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 22:27:14.908944   66662 start.go:562] Will wait 60s for crictl version
	I0505 22:27:14.909010   66662 ssh_runner.go:195] Run: which crictl
	I0505 22:27:14.913860   66662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 22:27:14.958296   66662 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 22:27:14.958394   66662 ssh_runner.go:195] Run: crio --version
	I0505 22:27:14.997034   66662 ssh_runner.go:195] Run: crio --version
	I0505 22:27:15.035680   66662 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 22:27:10.272634   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:10.771965   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:11.272759   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:11.772722   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:12.272750   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:12.772708   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:13.272710   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:13.772232   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:14.272658   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:14.772589   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:13.257630   65347 main.go:141] libmachine: (no-preload-112135) Calling .Start
	I0505 22:27:13.257802   65347 main.go:141] libmachine: (no-preload-112135) Ensuring networks are active...
	I0505 22:27:13.258538   65347 main.go:141] libmachine: (no-preload-112135) Ensuring network default is active
	I0505 22:27:13.258846   65347 main.go:141] libmachine: (no-preload-112135) Ensuring network mk-no-preload-112135 is active
	I0505 22:27:13.259304   65347 main.go:141] libmachine: (no-preload-112135) Getting domain xml...
	I0505 22:27:13.260101   65347 main.go:141] libmachine: (no-preload-112135) Creating domain...
	I0505 22:27:14.612502   65347 main.go:141] libmachine: (no-preload-112135) Waiting to get IP...
	I0505 22:27:14.613568   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:14.614069   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:14.614176   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:14.614052   67271 retry.go:31] will retry after 191.06264ms: waiting for machine to come up
	I0505 22:27:14.806685   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:14.807288   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:14.807313   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:14.807242   67271 retry.go:31] will retry after 280.20649ms: waiting for machine to come up
	I0505 22:27:15.088899   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:15.089522   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:15.089546   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:15.089461   67271 retry.go:31] will retry after 403.010405ms: waiting for machine to come up
	I0505 22:27:15.494392   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:15.495297   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:15.495317   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:15.495249   67271 retry.go:31] will retry after 452.944845ms: waiting for machine to come up
	I0505 22:27:16.372375   61991 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.698593426s)
	I0505 22:27:16.372444   61991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 22:27:16.395032   61991 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:27:16.412183   61991 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:27:16.412209   61991 kubeadm.go:156] found existing configuration files:
	
	I0505 22:27:16.412265   61991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:27:16.429525   61991 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:27:16.429587   61991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:27:16.444271   61991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:27:16.459363   61991 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:27:16.459427   61991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:27:16.473972   61991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:27:16.492272   61991 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:27:16.492372   61991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:27:16.511773   61991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:27:16.529104   61991 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:27:16.529176   61991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:27:16.546931   61991 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 22:27:16.634392   61991 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0505 22:27:16.634626   61991 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 22:27:16.831025   61991 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 22:27:16.831178   61991 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 22:27:16.831318   61991 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 22:27:17.136012   61991 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 22:27:17.138096   61991 out.go:204]   - Generating certificates and keys ...
	I0505 22:27:17.138217   61991 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 22:27:17.138309   61991 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 22:27:17.138414   61991 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0505 22:27:17.139886   61991 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0505 22:27:17.139992   61991 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0505 22:27:17.140088   61991 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0505 22:27:17.140206   61991 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0505 22:27:17.141781   61991 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0505 22:27:17.141891   61991 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0505 22:27:17.141999   61991 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0505 22:27:17.142052   61991 kubeadm.go:309] [certs] Using the existing "sa" key
	I0505 22:27:17.142125   61991 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 22:27:17.470046   61991 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 22:27:17.655360   61991 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0505 22:27:18.110360   61991 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 22:27:18.242523   61991 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 22:27:18.735991   61991 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 22:27:18.736654   61991 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 22:27:18.742555   61991 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 22:27:15.037250   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetIP
	I0505 22:27:15.040830   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:15.041307   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:15.041342   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:15.041606   66662 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0505 22:27:15.047293   66662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:27:15.064397   66662 kubeadm.go:877] updating cluster {Name:embed-certs-778109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-778109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.90 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 22:27:15.064500   66662 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 22:27:15.064539   66662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:27:15.115153   66662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0505 22:27:15.115240   66662 ssh_runner.go:195] Run: which lz4
	I0505 22:27:15.120518   66662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0505 22:27:15.125942   66662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 22:27:15.125993   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0505 22:27:17.011025   66662 crio.go:462] duration metric: took 1.890537141s to copy over tarball
	I0505 22:27:17.011103   66662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 22:27:19.757285   66662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.746129063s)
	I0505 22:27:19.757329   66662 crio.go:469] duration metric: took 2.746273069s to extract the tarball
	I0505 22:27:19.757339   66662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 22:27:19.799833   66662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:27:19.848157   66662 crio.go:514] all images are preloaded for cri-o runtime.
	I0505 22:27:19.848181   66662 cache_images.go:84] Images are preloaded, skipping loading
	I0505 22:27:19.848191   66662 kubeadm.go:928] updating node { 192.168.72.90 8443 v1.30.0 crio true true} ...
	I0505 22:27:19.848326   66662 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-778109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-778109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 22:27:19.848412   66662 ssh_runner.go:195] Run: crio config
	I0505 22:27:15.271929   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:15.772164   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:16.272706   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:16.772416   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:17.272713   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:17.772443   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:18.272749   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:18.772703   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:19.271896   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:19.771895   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:15.949945   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:15.950500   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:15.950528   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:15.950455   67271 retry.go:31] will retry after 497.902818ms: waiting for machine to come up
	I0505 22:27:16.450321   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:16.450877   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:16.450905   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:16.450834   67271 retry.go:31] will retry after 892.767539ms: waiting for machine to come up
	I0505 22:27:17.344948   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:17.345524   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:17.345570   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:17.345482   67271 retry.go:31] will retry after 907.562224ms: waiting for machine to come up
	I0505 22:27:18.254471   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:18.254948   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:18.255006   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:18.254911   67271 retry.go:31] will retry after 912.101023ms: waiting for machine to come up
	I0505 22:27:19.168539   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:19.169119   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:19.169151   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:19.169058   67271 retry.go:31] will retry after 1.451066446s: waiting for machine to come up
	I0505 22:27:20.621555   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:20.622136   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:20.622165   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:20.622084   67271 retry.go:31] will retry after 1.406367343s: waiting for machine to come up
	I0505 22:27:18.744416   61991 out.go:204]   - Booting up control plane ...
	I0505 22:27:18.744548   61991 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 22:27:18.744687   61991 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 22:27:18.744782   61991 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 22:27:18.771695   61991 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 22:27:18.772785   61991 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 22:27:18.772965   61991 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 22:27:18.956642   61991 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0505 22:27:18.956996   61991 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0505 22:27:19.959416   61991 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002425636s
	I0505 22:27:19.959559   61991 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0505 22:27:19.903383   66662 cni.go:84] Creating CNI manager for ""
	I0505 22:27:20.272267   66662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:27:20.272303   66662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 22:27:20.272342   66662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.90 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-778109 NodeName:embed-certs-778109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 22:27:20.272527   66662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-778109"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
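The YAML above is the expansion of the kubeadm options recorded at kubeadm.go:181. Purely as an illustration of that mapping (a hypothetical sketch, not minikube's actual generator), the per-cluster values can be substituted into a text/template along these lines:

// Hypothetical sketch: render a fragment of the kubeadm config from
// per-cluster values, mirroring how the options above map onto the YAML.
package main

import (
	"os"
	"text/template"
)

type clusterValues struct {
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the kubeadm options line above.
	if err := t.Execute(os.Stdout, clusterValues{
		AdvertiseAddress: "192.168.72.90",
		BindPort:         8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.30.0",
	}); err != nil {
		panic(err)
	}
}

Running the sketch prints an InitConfiguration/ClusterConfiguration fragment with the same advertise address, pod subnet, and Kubernetes version as the generated config above.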
	I0505 22:27:20.272609   66662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 22:27:20.289402   66662 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 22:27:20.289476   66662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 22:27:20.306220   66662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0505 22:27:20.330429   66662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 22:27:20.353182   66662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0505 22:27:20.376283   66662 ssh_runner.go:195] Run: grep 192.168.72.90	control-plane.minikube.internal$ /etc/hosts
	I0505 22:27:20.380996   66662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:27:20.401771   66662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:27:20.561726   66662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:27:20.585621   66662 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109 for IP: 192.168.72.90
	I0505 22:27:20.585646   66662 certs.go:194] generating shared ca certs ...
	I0505 22:27:20.585661   66662 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:27:20.585859   66662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 22:27:20.585921   66662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 22:27:20.585942   66662 certs.go:256] generating profile certs ...
	I0505 22:27:20.586062   66662 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/client.key
	I0505 22:27:20.586141   66662 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/apiserver.key.1a80858d
	I0505 22:27:20.586201   66662 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/proxy-client.key
	I0505 22:27:20.586383   66662 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 22:27:20.586429   66662 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 22:27:20.586445   66662 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 22:27:20.586479   66662 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 22:27:20.586520   66662 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 22:27:20.586566   66662 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 22:27:20.586636   66662 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:27:20.587438   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 22:27:20.639386   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 22:27:20.673642   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 22:27:20.707335   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 22:27:20.736260   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0505 22:27:20.766336   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 22:27:20.796749   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 22:27:20.825975   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/embed-certs-778109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0505 22:27:20.855745   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 22:27:20.883230   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 22:27:20.909678   66662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 22:27:20.936035   66662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 22:27:20.955209   66662 ssh_runner.go:195] Run: openssl version
	I0505 22:27:20.962146   66662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 22:27:20.975041   66662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:27:20.980289   66662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:27:20.980347   66662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:27:20.986771   66662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 22:27:20.999668   66662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 22:27:21.012615   66662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 22:27:21.017866   66662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 22:27:21.017926   66662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 22:27:21.024557   66662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 22:27:21.038096   66662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
	I0505 22:27:21.050984   66662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 22:27:21.056333   66662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 22:27:21.056400   66662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 22:27:21.062968   66662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 22:27:21.075454   66662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 22:27:21.080517   66662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 22:27:21.087016   66662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 22:27:21.093309   66662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 22:27:21.100006   66662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 22:27:21.106691   66662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 22:27:21.113310   66662 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 22:27:21.120057   66662 kubeadm.go:391] StartCluster: {Name:embed-certs-778109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-778109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.90 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:27:21.120175   66662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 22:27:21.120226   66662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:27:21.160091   66662 cri.go:89] found id: ""
	I0505 22:27:21.160171   66662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 22:27:21.175349   66662 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 22:27:21.175380   66662 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 22:27:21.175386   66662 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 22:27:21.175471   66662 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 22:27:21.193359   66662 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 22:27:21.194899   66662 kubeconfig.go:125] found "embed-certs-778109" server: "https://192.168.72.90:8443"
	I0505 22:27:21.197984   66662 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 22:27:21.212079   66662 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.90
	I0505 22:27:21.212125   66662 kubeadm.go:1154] stopping kube-system containers ...
	I0505 22:27:21.212139   66662 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0505 22:27:21.212204   66662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:27:21.265586   66662 cri.go:89] found id: ""
	I0505 22:27:21.265678   66662 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0505 22:27:21.289632   66662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:27:21.304134   66662 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:27:21.304162   66662 kubeadm.go:156] found existing configuration files:
	
	I0505 22:27:21.304213   66662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:27:21.314559   66662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:27:21.314624   66662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:27:21.325605   66662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:27:21.336327   66662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:27:21.336393   66662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:27:21.347975   66662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:27:21.358778   66662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:27:21.358840   66662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:27:21.369240   66662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:27:21.379586   66662 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:27:21.379663   66662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:27:21.393519   66662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 22:27:21.408747   66662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:21.542053   66662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:22.696879   66662 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154786911s)
	I0505 22:27:22.696915   66662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:22.947337   66662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:23.048516   66662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:23.179610   66662 api_server.go:52] waiting for apiserver process to appear ...
	I0505 22:27:23.179696   66662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:23.680718   66662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:24.180733   66662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:24.227530   66662 api_server.go:72] duration metric: took 1.047918907s to wait for apiserver process to appear ...
	I0505 22:27:24.227560   66662 api_server.go:88] waiting for apiserver healthz status ...
	I0505 22:27:24.227582   66662 api_server.go:253] Checking apiserver healthz at https://192.168.72.90:8443/healthz ...
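At api_server.go:88 the log starts polling https://192.168.72.90:8443/healthz and, as the responses further down show, a 403 (anonymous request before RBAC bootstrap finishes) or a 500 (post-start hooks still failing) is treated as "not healthy yet" and the check is retried. A bare-bones version of that polling pattern might look like the sketch below; it is illustrative only, not minikube's api_server.go, and it skips TLS verification just to stay self-contained.

// Illustrative sketch of a /healthz readiness poll; not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// 403 and 500 responses are expected while the apiserver finishes bootstrapping.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipped only to keep the sketch self-contained; a real check
			// should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.90:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}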
	I0505 22:27:20.272332   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:20.772714   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:21.272310   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:21.772590   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:22.272431   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:22.771886   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:23.271914   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:23.772169   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:24.272695   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:24.771903   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:22.030498   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:22.031049   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:22.031079   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:22.031003   67271 retry.go:31] will retry after 2.265225837s: waiting for machine to come up
	I0505 22:27:24.298375   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:24.298995   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:24.299028   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:24.298958   67271 retry.go:31] will retry after 3.062229884s: waiting for machine to come up
	I0505 22:27:26.763867   66662 api_server.go:279] https://192.168.72.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0505 22:27:26.763910   66662 api_server.go:103] status: https://192.168.72.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0505 22:27:26.763927   66662 api_server.go:253] Checking apiserver healthz at https://192.168.72.90:8443/healthz ...
	I0505 22:27:26.819880   66662 api_server.go:279] https://192.168.72.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0505 22:27:26.819911   66662 api_server.go:103] status: https://192.168.72.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0505 22:27:27.228443   66662 api_server.go:253] Checking apiserver healthz at https://192.168.72.90:8443/healthz ...
	I0505 22:27:27.234324   66662 api_server.go:279] https://192.168.72.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:27:27.234352   66662 api_server.go:103] status: https://192.168.72.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:27:27.728695   66662 api_server.go:253] Checking apiserver healthz at https://192.168.72.90:8443/healthz ...
	I0505 22:27:27.733687   66662 api_server.go:279] https://192.168.72.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:27:27.733720   66662 api_server.go:103] status: https://192.168.72.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:27:28.227827   66662 api_server.go:253] Checking apiserver healthz at https://192.168.72.90:8443/healthz ...
	I0505 22:27:28.240549   66662 api_server.go:279] https://192.168.72.90:8443/healthz returned 200:
	ok
	I0505 22:27:28.247261   66662 api_server.go:141] control plane version: v1.30.0
	I0505 22:27:28.247293   66662 api_server.go:131] duration metric: took 4.019726675s to wait for apiserver health ...
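The 403/500/200 sequence above is the bootstrapper polling the apiserver's /healthz endpoint until the remaining post-start hooks (the two [-] entries) finish settling after the restart. A rough manual equivalent from inside the guest, assuming the same in-guest kubeconfig and kubectl paths that the addon apply commands later in this log use, is:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.30.0/kubectl get --raw '/healthz?verbose'

This prints the same per-hook [+]/[-] listing seen in the log lines above.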
	I0505 22:27:28.247303   66662 cni.go:84] Creating CNI manager for ""
	I0505 22:27:28.247310   66662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:27:28.248940   66662 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 22:27:28.250913   66662 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 22:27:28.265402   66662 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
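The 496-byte file written above is the bridge CNI configuration for this profile. Its exact contents are not echoed into the log; a representative bridge-plus-portmap conflist in the standard CNI schema (illustrative values only, not necessarily the file minikube wrote) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}

Inside the guest the actual file can be inspected with "sudo cat /etc/cni/net.d/1-k8s.conflist".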
	I0505 22:27:28.296706   66662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 22:27:28.314245   66662 system_pods.go:59] 8 kube-system pods found
	I0505 22:27:28.314281   66662 system_pods.go:61] "coredns-7db6d8ff4d-fr99d" [fadbb0a4-d06e-4acf-8914-ccd6cf8b192b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 22:27:28.314292   66662 system_pods.go:61] "etcd-embed-certs-778109" [328f2155-7ec8-4786-b422-c8064b462248] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0505 22:27:28.314304   66662 system_pods.go:61] "kube-apiserver-embed-certs-778109" [6a3efc6f-8687-4ef6-ba69-4c7347e402f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0505 22:27:28.314314   66662 system_pods.go:61] "kube-controller-manager-embed-certs-778109" [f4d89238-3906-401b-a41a-84dd2c7d364c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0505 22:27:28.314325   66662 system_pods.go:61] "kube-proxy-8l2nn" [3346e555-f9ba-4901-96e5-d6b2130b5f77] Running
	I0505 22:27:28.314335   66662 system_pods.go:61] "kube-scheduler-embed-certs-778109" [cad183e6-d593-49d4-8f0d-5604af63bec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0505 22:27:28.314352   66662 system_pods.go:61] "metrics-server-569cc877fc-qwd2z" [8e04afb6-10be-4d93-86a0-7366d7f29701] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0505 22:27:28.314362   66662 system_pods.go:61] "storage-provisioner" [fa1cfe0c-8a8d-4bf5-9676-fbc381fbd37e] Running
	I0505 22:27:28.314373   66662 system_pods.go:74] duration metric: took 17.646868ms to wait for pod list to return data ...
	I0505 22:27:28.314385   66662 node_conditions.go:102] verifying NodePressure condition ...
	I0505 22:27:28.318539   66662 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 22:27:28.318570   66662 node_conditions.go:123] node cpu capacity is 2
	I0505 22:27:28.318581   66662 node_conditions.go:105] duration metric: took 4.188756ms to run NodePressure ...
	I0505 22:27:28.318597   66662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:28.593816   66662 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0505 22:27:28.597787   66662 kubeadm.go:733] kubelet initialised
	I0505 22:27:28.597807   66662 kubeadm.go:734] duration metric: took 3.96391ms waiting for restarted kubelet to initialise ...
	I0505 22:27:28.597815   66662 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 22:27:28.602833   66662 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fr99d" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:28.607365   66662 pod_ready.go:97] node "embed-certs-778109" hosting pod "coredns-7db6d8ff4d-fr99d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.607386   66662 pod_ready.go:81] duration metric: took 4.532898ms for pod "coredns-7db6d8ff4d-fr99d" in "kube-system" namespace to be "Ready" ...
	E0505 22:27:28.607394   66662 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-778109" hosting pod "coredns-7db6d8ff4d-fr99d" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.607401   66662 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:28.611616   66662 pod_ready.go:97] node "embed-certs-778109" hosting pod "etcd-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.611635   66662 pod_ready.go:81] duration metric: took 4.227901ms for pod "etcd-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	E0505 22:27:28.611643   66662 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-778109" hosting pod "etcd-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.611648   66662 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:28.615504   66662 pod_ready.go:97] node "embed-certs-778109" hosting pod "kube-apiserver-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.615524   66662 pod_ready.go:81] duration metric: took 3.869453ms for pod "kube-apiserver-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	E0505 22:27:28.615531   66662 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-778109" hosting pod "kube-apiserver-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.615537   66662 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:28.701948   66662 pod_ready.go:97] node "embed-certs-778109" hosting pod "kube-controller-manager-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.702005   66662 pod_ready.go:81] duration metric: took 86.460028ms for pod "kube-controller-manager-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	E0505 22:27:28.702015   66662 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-778109" hosting pod "kube-controller-manager-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:28.702021   66662 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8l2nn" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:29.100321   66662 pod_ready.go:97] node "embed-certs-778109" hosting pod "kube-proxy-8l2nn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:29.100350   66662 pod_ready.go:81] duration metric: took 398.321062ms for pod "kube-proxy-8l2nn" in "kube-system" namespace to be "Ready" ...
	E0505 22:27:29.100358   66662 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-778109" hosting pod "kube-proxy-8l2nn" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:29.100364   66662 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:29.501274   66662 pod_ready.go:97] node "embed-certs-778109" hosting pod "kube-scheduler-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:29.501306   66662 pod_ready.go:81] duration metric: took 400.934516ms for pod "kube-scheduler-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	E0505 22:27:29.501320   66662 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-778109" hosting pod "kube-scheduler-embed-certs-778109" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:29.501330   66662 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:29.900619   66662 pod_ready.go:97] node "embed-certs-778109" hosting pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:29.900649   66662 pod_ready.go:81] duration metric: took 399.309266ms for pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace to be "Ready" ...
	E0505 22:27:29.900658   66662 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-778109" hosting pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:29.900665   66662 pod_ready.go:38] duration metric: took 1.302842223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
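Each system-critical pod above is skipped rather than waited on because the node itself is not yet Ready after the restart. The same readiness check can be reproduced by hand with kubectl wait (a sketch, assuming the in-guest kubeconfig and binary paths shown elsewhere in this log):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.30.0/kubectl -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s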
	I0505 22:27:29.900691   66662 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 22:27:29.913982   66662 ops.go:34] apiserver oom_adj: -16
	I0505 22:27:29.914007   66662 kubeadm.go:591] duration metric: took 8.73861443s to restartPrimaryControlPlane
	I0505 22:27:29.914017   66662 kubeadm.go:393] duration metric: took 8.793969126s to StartCluster
	I0505 22:27:29.914035   66662 settings.go:142] acquiring lock: {Name:mkbe19b7965e4b0b9928cd2b7b56f51dec95b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:27:29.914205   66662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:27:29.916604   66662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/kubeconfig: {Name:mk083e789ff55e795c8f0eb5d298a0e27ad9cdde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:27:29.916940   66662 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.90 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0505 22:27:29.918660   66662 out.go:177] * Verifying Kubernetes components...
	I0505 22:27:29.917013   66662 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 22:27:29.917108   66662 config.go:182] Loaded profile config "embed-certs-778109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:27:29.920044   66662 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-778109"
	I0505 22:27:29.920067   66662 addons.go:69] Setting metrics-server=true in profile "embed-certs-778109"
	I0505 22:27:29.920076   66662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:27:29.920080   66662 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-778109"
	I0505 22:27:29.920082   66662 addons.go:69] Setting default-storageclass=true in profile "embed-certs-778109"
	I0505 22:27:29.920134   66662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-778109"
	W0505 22:27:29.920091   66662 addons.go:243] addon storage-provisioner should already be in state true
	I0505 22:27:29.920244   66662 host.go:66] Checking if "embed-certs-778109" exists ...
	I0505 22:27:29.920093   66662 addons.go:234] Setting addon metrics-server=true in "embed-certs-778109"
	W0505 22:27:29.920325   66662 addons.go:243] addon metrics-server should already be in state true
	I0505 22:27:29.920350   66662 host.go:66] Checking if "embed-certs-778109" exists ...
	I0505 22:27:29.920550   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:29.920595   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:29.920604   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:29.920616   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:29.920701   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:29.920736   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:29.935030   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0505 22:27:29.935204   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0505 22:27:29.935397   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:29.935653   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:29.935885   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:27:29.935911   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:29.936137   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:27:29.936162   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:29.936230   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:29.936479   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:29.936623   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetState
	I0505 22:27:29.936796   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:29.936831   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:29.939977   66662 addons.go:234] Setting addon default-storageclass=true in "embed-certs-778109"
	W0505 22:27:29.939999   66662 addons.go:243] addon default-storageclass should already be in state true
	I0505 22:27:29.940028   66662 host.go:66] Checking if "embed-certs-778109" exists ...
	I0505 22:27:29.940194   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45609
	I0505 22:27:29.940282   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:29.940311   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:29.940599   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:29.941044   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:27:29.941064   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:29.941441   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:29.941953   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:29.941984   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:29.951847   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0505 22:27:29.952297   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:29.952770   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:27:29.952789   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:29.953087   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:29.953267   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetState
	I0505 22:27:29.954721   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:29.957066   66662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:27:29.958705   66662 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 22:27:29.958724   66662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 22:27:29.958742   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:29.959378   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0505 22:27:29.959801   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:29.960288   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:27:29.960308   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:29.960697   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:29.961320   66662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 22:27:29.961361   66662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 22:27:29.961565   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42015
	I0505 22:27:29.961885   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:29.962051   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:29.962263   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:27:29.962277   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:29.962454   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:29.962477   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:29.962568   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:29.962732   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetState
	I0505 22:27:29.962773   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:29.962916   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:29.963070   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:29.963174   66662 sshutil.go:53] new ssh client: &{IP:192.168.72.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa Username:docker}
	I0505 22:27:29.964427   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:29.966395   66662 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0505 22:27:25.272732   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:25.771914   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:26.272774   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:26.772201   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:27.272617   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:27.772706   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:28.272707   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:28.771908   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:29.272501   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:29.772595   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
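The repeated Run: lines above (and their continuation further down) come from a second profile's bootstrapper polling roughly every 500ms for a kube-apiserver process. A standalone sketch of that wait, assuming bash and pgrep are available in the guest, is:

	# illustrative wait loop; minikube drives this from Go, issuing one pgrep per attempt
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		sleep 0.5
	done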
	I0505 22:27:27.365096   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:27.365535   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:27.365579   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:27.365487   67271 retry.go:31] will retry after 3.222029838s: waiting for machine to come up
	I0505 22:27:30.588972   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:30.589532   65347 main.go:141] libmachine: (no-preload-112135) DBG | unable to find current IP address of domain no-preload-112135 in network mk-no-preload-112135
	I0505 22:27:30.589566   65347 main.go:141] libmachine: (no-preload-112135) DBG | I0505 22:27:30.589480   67271 retry.go:31] will retry after 4.272312722s: waiting for machine to come up
	I0505 22:27:29.967932   66662 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0505 22:27:29.967943   66662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0505 22:27:29.967957   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:29.971107   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:29.971616   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:29.971631   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:29.971832   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:29.971983   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:29.972117   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:29.972207   66662 sshutil.go:53] new ssh client: &{IP:192.168.72.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa Username:docker}
	I0505 22:27:29.977456   66662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40675
	I0505 22:27:29.977734   66662 main.go:141] libmachine: () Calling .GetVersion
	I0505 22:27:29.978105   66662 main.go:141] libmachine: Using API Version  1
	I0505 22:27:29.978121   66662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 22:27:29.978457   66662 main.go:141] libmachine: () Calling .GetMachineName
	I0505 22:27:29.978584   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetState
	I0505 22:27:29.979809   66662 main.go:141] libmachine: (embed-certs-778109) Calling .DriverName
	I0505 22:27:29.980050   66662 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 22:27:29.980061   66662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 22:27:29.980076   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHHostname
	I0505 22:27:29.982773   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:29.983204   66662 main.go:141] libmachine: (embed-certs-778109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:76:d0", ip: ""} in network mk-embed-certs-778109: {Iface:virbr4 ExpiryTime:2024-05-05 23:27:01 +0000 UTC Type:0 Mac:52:54:00:b4:76:d0 Iaid: IPaddr:192.168.72.90 Prefix:24 Hostname:embed-certs-778109 Clientid:01:52:54:00:b4:76:d0}
	I0505 22:27:29.983219   66662 main.go:141] libmachine: (embed-certs-778109) DBG | domain embed-certs-778109 has defined IP address 192.168.72.90 and MAC address 52:54:00:b4:76:d0 in network mk-embed-certs-778109
	I0505 22:27:29.983382   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHPort
	I0505 22:27:29.983557   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHKeyPath
	I0505 22:27:29.983704   66662 main.go:141] libmachine: (embed-certs-778109) Calling .GetSSHUsername
	I0505 22:27:29.983857   66662 sshutil.go:53] new ssh client: &{IP:192.168.72.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/embed-certs-778109/id_rsa Username:docker}
	I0505 22:27:30.120710   66662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:27:30.141390   66662 node_ready.go:35] waiting up to 6m0s for node "embed-certs-778109" to be "Ready" ...
	I0505 22:27:30.211423   66662 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0505 22:27:30.211447   66662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0505 22:27:30.223192   66662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 22:27:30.249924   66662 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0505 22:27:30.249948   66662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0505 22:27:30.308544   66662 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 22:27:30.308577   66662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0505 22:27:30.332442   66662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0505 22:27:30.371407   66662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 22:27:30.591071   66662 main.go:141] libmachine: Making call to close driver server
	I0505 22:27:30.591095   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Close
	I0505 22:27:30.591377   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Closing plugin on server side
	I0505 22:27:30.591409   66662 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:27:30.591424   66662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:27:30.591445   66662 main.go:141] libmachine: Making call to close driver server
	I0505 22:27:30.591458   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Close
	I0505 22:27:30.591679   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Closing plugin on server side
	I0505 22:27:30.591710   66662 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:27:30.591721   66662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:27:30.598502   66662 main.go:141] libmachine: Making call to close driver server
	I0505 22:27:30.598523   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Close
	I0505 22:27:30.598816   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Closing plugin on server side
	I0505 22:27:30.598816   66662 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:27:30.598842   66662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:27:31.326430   66662 main.go:141] libmachine: Making call to close driver server
	I0505 22:27:31.326454   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Close
	I0505 22:27:31.326488   66662 main.go:141] libmachine: Making call to close driver server
	I0505 22:27:31.326507   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Close
	I0505 22:27:31.326841   66662 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:27:31.326860   66662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:27:31.326869   66662 main.go:141] libmachine: Making call to close driver server
	I0505 22:27:31.326869   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Closing plugin on server side
	I0505 22:27:31.326877   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Close
	I0505 22:27:31.326930   66662 main.go:141] libmachine: (embed-certs-778109) DBG | Closing plugin on server side
	I0505 22:27:31.326937   66662 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:27:31.326944   66662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:27:31.326953   66662 main.go:141] libmachine: Making call to close driver server
	I0505 22:27:31.326961   66662 main.go:141] libmachine: (embed-certs-778109) Calling .Close
	I0505 22:27:31.327128   66662 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:27:31.327142   66662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:27:31.327152   66662 addons.go:475] Verifying addon metrics-server=true in "embed-certs-778109"
	I0505 22:27:31.327183   66662 main.go:141] libmachine: Successfully made call to close driver server
	I0505 22:27:31.327200   66662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 22:27:31.329395   66662 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0505 22:27:31.330841   66662 addons.go:510] duration metric: took 1.413825638s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0505 22:27:32.145345   66662 node_ready.go:53] node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:34.145526   66662 node_ready.go:53] node "embed-certs-778109" has status "Ready":"False"
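With the three addons applied above, one way to confirm that metrics-server actually rolls out once the node goes Ready is (a sketch, reusing the same in-guest kubectl invocation that appears in the apply commands above):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.30.0/kubectl -n kube-system \
	  rollout status deployment/metrics-server --timeout=2m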
	I0505 22:27:30.272134   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:30.772703   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:31.272656   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:31.772111   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:32.272248   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:32.771844   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:33.272183   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:33.772572   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:34.272522   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:34.772700   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:34.863700   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:34.864231   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has current primary IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:34.864244   65347 main.go:141] libmachine: (no-preload-112135) Found IP for machine: 192.168.61.167
	I0505 22:27:34.864253   65347 main.go:141] libmachine: (no-preload-112135) Reserving static IP address...
	I0505 22:27:34.864710   65347 main.go:141] libmachine: (no-preload-112135) Reserved static IP address: 192.168.61.167
	I0505 22:27:34.864738   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "no-preload-112135", mac: "52:54:00:fe:c4:2d", ip: "192.168.61.167"} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:34.864745   65347 main.go:141] libmachine: (no-preload-112135) Waiting for SSH to be available...
	I0505 22:27:34.864766   65347 main.go:141] libmachine: (no-preload-112135) DBG | skip adding static IP to network mk-no-preload-112135 - found existing host DHCP lease matching {name: "no-preload-112135", mac: "52:54:00:fe:c4:2d", ip: "192.168.61.167"}
	I0505 22:27:34.864775   65347 main.go:141] libmachine: (no-preload-112135) DBG | Getting to WaitForSSH function...
	I0505 22:27:34.866638   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:34.866944   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:34.866967   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:34.867132   65347 main.go:141] libmachine: (no-preload-112135) DBG | Using SSH client type: external
	I0505 22:27:34.867162   65347 main.go:141] libmachine: (no-preload-112135) DBG | Using SSH private key: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/no-preload-112135/id_rsa (-rw-------)
	I0505 22:27:34.867199   65347 main.go:141] libmachine: (no-preload-112135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18602-11466/.minikube/machines/no-preload-112135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0505 22:27:34.867213   65347 main.go:141] libmachine: (no-preload-112135) DBG | About to run SSH command:
	I0505 22:27:34.867240   65347 main.go:141] libmachine: (no-preload-112135) DBG | exit 0
	I0505 22:27:34.991847   65347 main.go:141] libmachine: (no-preload-112135) DBG | SSH cmd err, output: <nil>: 
	I0505 22:27:34.992274   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetConfigRaw
	I0505 22:27:34.993043   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetIP
	I0505 22:27:34.995779   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:34.996082   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:34.996118   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:34.996377   65347 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/config.json ...
	I0505 22:27:34.996547   65347 machine.go:94] provisionDockerMachine start ...
	I0505 22:27:34.996564   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	I0505 22:27:34.996755   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:34.999522   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:34.999926   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:34.999957   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.000175   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:35.000381   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.000568   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.000733   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:35.000919   65347 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:35.001131   65347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0505 22:27:35.001146   65347 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 22:27:35.104683   65347 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 22:27:35.104714   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetMachineName
	I0505 22:27:35.104967   65347 buildroot.go:166] provisioning hostname "no-preload-112135"
	I0505 22:27:35.104989   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetMachineName
	I0505 22:27:35.105160   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:35.108182   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.108572   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:35.108603   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.108766   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:35.108983   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.109134   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.109289   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:35.109441   65347 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:35.109658   65347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0505 22:27:35.109678   65347 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-112135 && echo "no-preload-112135" | sudo tee /etc/hostname
	I0505 22:27:35.238473   65347 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-112135
	
	I0505 22:27:35.238510   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:35.241433   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.241855   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:35.241913   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.242014   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:35.242200   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.242386   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.242525   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:35.242726   65347 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:35.242896   65347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0505 22:27:35.242918   65347 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-112135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-112135/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-112135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 22:27:35.357917   65347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 22:27:35.357954   65347 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18602-11466/.minikube CaCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18602-11466/.minikube}
	I0505 22:27:35.357980   65347 buildroot.go:174] setting up certificates
	I0505 22:27:35.357992   65347 provision.go:84] configureAuth start
	I0505 22:27:35.358003   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetMachineName
	I0505 22:27:35.358337   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetIP
	I0505 22:27:35.361273   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.361679   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:35.361710   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.361922   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:35.364525   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.364824   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:35.364850   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.365028   65347 provision.go:143] copyHostCerts
	I0505 22:27:35.365082   65347 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem, removing ...
	I0505 22:27:35.365103   65347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem
	I0505 22:27:35.365153   65347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/ca.pem (1078 bytes)
	I0505 22:27:35.365259   65347 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem, removing ...
	I0505 22:27:35.365279   65347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem
	I0505 22:27:35.365314   65347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/cert.pem (1123 bytes)
	I0505 22:27:35.365393   65347 exec_runner.go:144] found /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem, removing ...
	I0505 22:27:35.365403   65347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem
	I0505 22:27:35.365434   65347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18602-11466/.minikube/key.pem (1675 bytes)
	I0505 22:27:35.365508   65347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem org=jenkins.no-preload-112135 san=[127.0.0.1 192.168.61.167 localhost minikube no-preload-112135]
	I0505 22:27:35.640883   65347 provision.go:177] copyRemoteCerts
	I0505 22:27:35.640941   65347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 22:27:35.640964   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:35.643971   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.644357   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:35.644390   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.644547   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:35.644747   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.644907   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:35.645026   65347 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/no-preload-112135/id_rsa Username:docker}
	I0505 22:27:35.731220   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 22:27:35.760455   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 22:27:35.790410   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0505 22:27:35.819085   65347 provision.go:87] duration metric: took 461.075683ms to configureAuth
	I0505 22:27:35.819120   65347 buildroot.go:189] setting minikube options for container-runtime
	I0505 22:27:35.819303   65347 config.go:182] Loaded profile config "no-preload-112135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:27:35.819398   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:35.822019   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.822361   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:35.822392   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:35.822581   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:35.822760   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.822912   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:35.823050   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:35.823223   65347 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:35.823433   65347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0505 22:27:35.823455   65347 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0505 22:27:36.113524   65347 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
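The %!s(MISSING) in the printf command above is a logging artifact: the command string contains a literal %s verb, which Go's fmt flags when the line is echoed through the logger. Reconstructed from the log, the command executed on the guest is, in effect:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The tee output echoed two lines above confirms the written file contents.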
	I0505 22:27:36.113552   65347 machine.go:97] duration metric: took 1.116993575s to provisionDockerMachine
	I0505 22:27:36.113566   65347 start.go:293] postStartSetup for "no-preload-112135" (driver="kvm2")
	I0505 22:27:36.113580   65347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 22:27:36.113600   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	I0505 22:27:36.113940   65347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 22:27:36.113976   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:36.116933   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.117296   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:36.117322   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.117512   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:36.117689   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:36.117883   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:36.118029   65347 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/no-preload-112135/id_rsa Username:docker}
	I0505 22:27:36.200492   65347 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 22:27:36.205280   65347 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 22:27:36.205306   65347 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/addons for local assets ...
	I0505 22:27:36.205378   65347 filesync.go:126] Scanning /home/jenkins/minikube-integration/18602-11466/.minikube/files for local assets ...
	I0505 22:27:36.205451   65347 filesync.go:149] local asset: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem -> 187982.pem in /etc/ssl/certs
	I0505 22:27:36.205548   65347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 22:27:36.216563   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:27:36.241911   65347 start.go:296] duration metric: took 128.331903ms for postStartSetup
	I0505 22:27:36.241950   65347 fix.go:56] duration metric: took 23.008372843s for fixHost
	I0505 22:27:36.241968   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:36.244424   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.244789   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:36.244815   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.244935   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:36.245113   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:36.245266   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:36.245395   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:36.245561   65347 main.go:141] libmachine: Using SSH client type: native
	I0505 22:27:36.245776   65347 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.167 22 <nil> <nil>}
	I0505 22:27:36.245789   65347 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 22:27:36.357119   65347 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714948056.340397299
	
	I0505 22:27:36.357148   65347 fix.go:216] guest clock: 1714948056.340397299
	I0505 22:27:36.357155   65347 fix.go:229] Guest: 2024-05-05 22:27:36.340397299 +0000 UTC Remote: 2024-05-05 22:27:36.241953303 +0000 UTC m=+345.550454250 (delta=98.443996ms)
	I0505 22:27:36.357174   65347 fix.go:200] guest clock delta is within tolerance: 98.443996ms
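The fix.go lines above read the guest clock with `date +%s.%N` and compare it against the host-side timestamp, accepting the ~98ms delta. A minimal Go sketch of that drift check, using the values printed in the log (the helper name and the 2s tolerance are illustrative assumptions, not minikube's actual fix.go code):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest clock and the
// host-observed remote time, and whether it falls within the allowed drift.
func clockDelta(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(remote)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	// Timestamps taken from the log above.
	guest := time.Unix(1714948056, 340397299)                        // `date +%s.%N` on the VM
	remote := time.Date(2024, 5, 5, 22, 27, 36, 241953303, time.UTC) // host-side timestamp
	d, ok := clockDelta(guest, remote, 2*time.Second)                // tolerance value is an assumption
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)               // delta=98.443996ms withinTolerance=true
}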
	I0505 22:27:36.357178   65347 start.go:83] releasing machines lock for "no-preload-112135", held for 23.123647501s
	I0505 22:27:36.357196   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	I0505 22:27:36.357448   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetIP
	I0505 22:27:36.360643   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.361001   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:36.361029   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.361168   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	I0505 22:27:36.361725   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	I0505 22:27:36.361922   65347 main.go:141] libmachine: (no-preload-112135) Calling .DriverName
	I0505 22:27:36.362020   65347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 22:27:36.362070   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:36.362170   65347 ssh_runner.go:195] Run: cat /version.json
	I0505 22:27:36.362197   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHHostname
	I0505 22:27:36.364883   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.364907   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.365277   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:36.365329   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:36.365366   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.365391   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:36.365538   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:36.365631   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHPort
	I0505 22:27:36.365720   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:36.365790   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHKeyPath
	I0505 22:27:36.365855   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:36.365944   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetSSHUsername
	I0505 22:27:36.365998   65347 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/no-preload-112135/id_rsa Username:docker}
	I0505 22:27:36.366070   65347 sshutil.go:53] new ssh client: &{IP:192.168.61.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/no-preload-112135/id_rsa Username:docker}
	I0505 22:27:36.461987   65347 ssh_runner.go:195] Run: systemctl --version
	I0505 22:27:36.469411   65347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0505 22:27:36.620722   65347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 22:27:36.628052   65347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 22:27:36.628114   65347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 22:27:36.645411   65347 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 22:27:36.645453   65347 start.go:494] detecting cgroup driver to use...
	I0505 22:27:36.645527   65347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 22:27:36.662593   65347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 22:27:36.679281   65347 docker.go:217] disabling cri-docker service (if available) ...
	I0505 22:27:36.679364   65347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0505 22:27:36.694430   65347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0505 22:27:36.711257   65347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0505 22:27:36.834080   65347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0505 22:27:37.028978   65347 docker.go:233] disabling docker service ...
	I0505 22:27:37.029059   65347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0505 22:27:37.045675   65347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0505 22:27:37.060077   65347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0505 22:27:37.201363   65347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0505 22:27:37.335976   65347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0505 22:27:37.351195   65347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 22:27:37.371363   65347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0505 22:27:37.371430   65347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:37.382503   65347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0505 22:27:37.382565   65347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:37.393723   65347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:37.405323   65347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:37.418013   65347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 22:27:37.430812   65347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:37.443016   65347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0505 22:27:37.466159   65347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
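The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A rough Go equivalent of those edits applied to the drop-in contents as a string (a sketch only; minikube drives sed over SSH rather than editing locally):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// applyCrioEdits mirrors the sed pipeline from the log: set the pause image,
// force the cgroupfs cgroup manager, pin conmon_cgroup to "pod", and make sure
// net.ipv4.ip_unprivileged_port_start=0 is present in default_sysctls.
func applyCrioEdits(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf, `cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n]\n"
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Println(applyCrioEdits(in))
}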
	I0505 22:27:37.483091   65347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 22:27:37.496819   65347 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0505 22:27:37.496881   65347 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0505 22:27:37.511317   65347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 22:27:37.523334   65347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:27:37.674093   65347 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0505 22:27:37.837500   65347 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0505 22:27:37.837586   65347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0505 22:27:37.844245   65347 start.go:562] Will wait 60s for crictl version
	I0505 22:27:37.844321   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:37.849084   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 22:27:37.897587   65347 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0505 22:27:37.897668   65347 ssh_runner.go:195] Run: crio --version
	I0505 22:27:37.933709   65347 ssh_runner.go:195] Run: crio --version
	I0505 22:27:37.970772   65347 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0505 22:27:36.646115   66662 node_ready.go:53] node "embed-certs-778109" has status "Ready":"False"
	I0505 22:27:37.644663   66662 node_ready.go:49] node "embed-certs-778109" has status "Ready":"True"
	I0505 22:27:37.644687   66662 node_ready.go:38] duration metric: took 7.503260028s for node "embed-certs-778109" to be "Ready" ...
	I0505 22:27:37.644697   66662 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 22:27:37.652868   66662 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fr99d" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:37.660641   66662 pod_ready.go:92] pod "coredns-7db6d8ff4d-fr99d" in "kube-system" namespace has status "Ready":"True"
	I0505 22:27:37.660662   66662 pod_ready.go:81] duration metric: took 7.767133ms for pod "coredns-7db6d8ff4d-fr99d" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:37.660671   66662 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:39.668341   66662 pod_ready.go:102] pod "etcd-embed-certs-778109" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:35.272096   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:35.772684   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:36.272172   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:36.772724   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:37.272191   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:37.771933   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:38.272510   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:38.772699   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:39.272239   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:39.771995   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
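The 66092 runner above is polling the VM for a kube-apiserver process roughly every 500ms. A stripped-down sketch of that wait loop, run locally against pgrep for illustration (minikube executes the same command over SSH via its ssh_runner):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or the
// context expires. Sketch only; flags are copied from the command in the log
// (-x exact match, -n newest, -f match against the full command line).
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx, 500*time.Millisecond))
}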
	I0505 22:27:37.972139   65347 main.go:141] libmachine: (no-preload-112135) Calling .GetIP
	I0505 22:27:37.974900   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:37.975233   65347 main.go:141] libmachine: (no-preload-112135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:c4:2d", ip: ""} in network mk-no-preload-112135: {Iface:virbr3 ExpiryTime:2024-05-05 23:27:26 +0000 UTC Type:0 Mac:52:54:00:fe:c4:2d Iaid: IPaddr:192.168.61.167 Prefix:24 Hostname:no-preload-112135 Clientid:01:52:54:00:fe:c4:2d}
	I0505 22:27:37.975264   65347 main.go:141] libmachine: (no-preload-112135) DBG | domain no-preload-112135 has defined IP address 192.168.61.167 and MAC address 52:54:00:fe:c4:2d in network mk-no-preload-112135
	I0505 22:27:37.975497   65347 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0505 22:27:37.980301   65347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
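The bash one-liner above rewrites /etc/hosts so that exactly one host.minikube.internal entry points at the gateway IP. A hedged Go sketch of the same idempotent update applied to the file contents (function and variable names are illustrative, not minikube's):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line for the given hostname and appends
// a fresh "ip\tname" entry, matching the `{ grep -v ...; echo ...; }` pipeline
// in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as grep -v $'\t<name>$'
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name) // echo "<ip>\t<name>"
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.61.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.61.1", "host.minikube.internal"))
}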
	I0505 22:27:37.996122   65347 kubeadm.go:877] updating cluster {Name:no-preload-112135 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-112135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 22:27:37.996242   65347 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 22:27:37.996275   65347 ssh_runner.go:195] Run: sudo crictl images --output json
	I0505 22:27:38.042512   65347 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0505 22:27:38.042543   65347 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0505 22:27:38.042613   65347 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:27:38.042652   65347 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0505 22:27:38.042680   65347 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0505 22:27:38.042685   65347 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0505 22:27:38.042698   65347 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0505 22:27:38.042629   65347 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0505 22:27:38.042812   65347 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0505 22:27:38.042638   65347 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0505 22:27:38.044180   65347 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0505 22:27:38.044212   65347 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0505 22:27:38.044220   65347 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0505 22:27:38.044221   65347 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:27:38.044181   65347 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0505 22:27:38.044220   65347 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0505 22:27:38.044237   65347 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0505 22:27:38.044266   65347 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0505 22:27:38.197681   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0505 22:27:38.246720   65347 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0505 22:27:38.246765   65347 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0505 22:27:38.246800   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:38.251345   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0505 22:27:38.256227   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0505 22:27:38.306805   65347 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0505 22:27:38.306896   65347 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0505 22:27:38.313588   65347 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0505 22:27:38.313631   65347 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0505 22:27:38.313676   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:38.316244   65347 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0505 22:27:38.316265   65347 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0505 22:27:38.316310   65347 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0505 22:27:38.319713   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0505 22:27:38.374482   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0505 22:27:38.375663   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0505 22:27:38.375918   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0505 22:27:38.385471   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0505 22:27:38.407580   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0505 22:27:38.937793   65347 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:27:40.604306   65347 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.287968679s)
	I0505 22:27:40.604340   65347 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0505 22:27:40.604315   65347 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (2.284575652s)
	I0505 22:27:40.604389   65347 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (2.229856358s)
	I0505 22:27:40.604429   65347 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9: (2.22874231s)
	I0505 22:27:40.604429   65347 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0505 22:27:40.604512   65347 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0505 22:27:40.604517   65347 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1: (2.21902863s)
	I0505 22:27:40.604533   65347 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0505 22:27:40.604550   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:40.604397   65347 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0505 22:27:40.604560   65347 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0505 22:27:40.604478   65347 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0: (2.228535786s)
	I0505 22:27:40.604618   65347 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0: (2.197008345s)
	I0505 22:27:40.604637   65347 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0505 22:27:40.604644   65347 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.666822419s)
	I0505 22:27:40.604604   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:40.604640   65347 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0505 22:27:40.604645   65347 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0505 22:27:40.604669   65347 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0505 22:27:40.604674   65347 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0505 22:27:40.604683   65347 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0505 22:27:40.604684   65347 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:27:40.604708   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:40.604720   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:40.604721   65347 ssh_runner.go:195] Run: which crictl
	I0505 22:27:40.619829   65347 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0505 22:27:40.619846   65347 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0505 22:27:40.619874   65347 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0505 22:27:40.619899   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0505 22:27:40.619874   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 22:27:40.619963   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0505 22:27:40.619973   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0505 22:27:40.619940   65347 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
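The cache_images flow above follows the same pattern for each required image: inspect the container runtime, and when the image is absent, remove any stale tag with crictl and load the cached tarball with podman. A condensed sketch of that per-image decision (commands copied from the log; error handling trimmed, paths illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage mirrors the per-image steps in the log: if `podman image inspect`
// cannot resolve the expected image, remove the tag via crictl and stream the
// cached tarball back in with `podman load -i`.
func ensureImage(image, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present in the container runtime
	}
	if err := exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run(); err != nil {
		fmt.Printf("rmi %s: %v (ignored)\n", image, err)
	}
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/kube-proxy:v1.30.0",
		"/var/lib/minikube/images/kube-proxy_v1.30.0")
	fmt.Println("load result:", err)
}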
	I0505 22:27:42.168085   66662 pod_ready.go:102] pod "etcd-embed-certs-778109" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:43.167614   66662 pod_ready.go:92] pod "etcd-embed-certs-778109" in "kube-system" namespace has status "Ready":"True"
	I0505 22:27:43.167642   66662 pod_ready.go:81] duration metric: took 5.506963804s for pod "etcd-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.167655   66662 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.173798   66662 pod_ready.go:92] pod "kube-apiserver-embed-certs-778109" in "kube-system" namespace has status "Ready":"True"
	I0505 22:27:43.173825   66662 pod_ready.go:81] duration metric: took 6.154862ms for pod "kube-apiserver-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.173837   66662 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.179445   66662 pod_ready.go:92] pod "kube-controller-manager-embed-certs-778109" in "kube-system" namespace has status "Ready":"True"
	I0505 22:27:43.179464   66662 pod_ready.go:81] duration metric: took 5.619905ms for pod "kube-controller-manager-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.179475   66662 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8l2nn" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.184648   66662 pod_ready.go:92] pod "kube-proxy-8l2nn" in "kube-system" namespace has status "Ready":"True"
	I0505 22:27:43.184671   66662 pod_ready.go:81] duration metric: took 5.179068ms for pod "kube-proxy-8l2nn" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.184682   66662 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.190160   66662 pod_ready.go:92] pod "kube-scheduler-embed-certs-778109" in "kube-system" namespace has status "Ready":"True"
	I0505 22:27:43.190177   66662 pod_ready.go:81] duration metric: took 5.488693ms for pod "kube-scheduler-embed-certs-778109" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:43.190186   66662 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace to be "Ready" ...
	I0505 22:27:40.271839   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:40.772187   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:41.272474   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:41.772789   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:42.271868   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:42.772221   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:43.272690   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:43.772691   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:44.272736   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:44.772695   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:42.811974   65347 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.191997103s)
	I0505 22:27:42.812087   65347 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0505 22:27:42.812168   65347 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.192264494s)
	I0505 22:27:42.812241   65347 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0505 22:27:42.812271   65347 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0: (2.192287553s)
	I0505 22:27:42.812322   65347 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0505 22:27:42.812238   65347 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.192309502s)
	I0505 22:27:42.812388   65347 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0505 22:27:42.812396   65347 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0505 22:27:42.812214   65347 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0505 22:27:42.812487   65347 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0505 22:27:42.812620   65347 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0: (2.19263098s)
	I0505 22:27:42.812656   65347 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0: (2.192659566s)
	I0505 22:27:42.812707   65347 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0505 22:27:42.812719   65347 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0505 22:27:42.812800   65347 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0505 22:27:42.812852   65347 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0505 22:27:42.819594   65347 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0505 22:27:42.819614   65347 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0505 22:27:42.819657   65347 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0505 22:27:42.825014   65347 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0505 22:27:42.829766   65347 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0505 22:27:42.829791   65347 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0505 22:27:42.830066   65347 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0505 22:27:45.406409   65347 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.586720989s)
	I0505 22:27:45.406445   65347 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0505 22:27:45.406473   65347 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0505 22:27:45.406538   65347 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0505 22:27:45.197895   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:47.697862   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:45.272663   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:45.771800   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:46.272749   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:46.771868   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:47.272258   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:47.772595   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:48.271976   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:48.772668   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:49.272737   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:49.772721   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:47.400601   65347 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.994039239s)
	I0505 22:27:47.400628   65347 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0505 22:27:47.400648   65347 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0505 22:27:47.400693   65347 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0505 22:27:50.197231   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:52.198095   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:54.200814   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:50.272832   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:50.771993   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:51.272116   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:51.772692   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:52.272477   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:52.772369   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:53.272119   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:53.771913   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:54.272807   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:54.771776   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:51.590996   65347 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.190264602s)
	I0505 22:27:51.591034   65347 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0505 22:27:51.591070   65347 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0505 22:27:51.591128   65347 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0505 22:27:54.064120   65347 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.472961407s)
	I0505 22:27:54.064154   65347 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0505 22:27:54.064175   65347 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0505 22:27:54.064233   65347 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0505 22:27:54.917763   65347 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18602-11466/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0505 22:27:54.917812   65347 cache_images.go:123] Successfully loaded all cached images
	I0505 22:27:54.917819   65347 cache_images.go:92] duration metric: took 16.875260224s to LoadCachedImages
	I0505 22:27:54.917834   65347 kubeadm.go:928] updating node { 192.168.61.167 8443 v1.30.0 crio true true} ...
	I0505 22:27:54.917975   65347 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-112135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-112135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 22:27:54.918053   65347 ssh_runner.go:195] Run: crio config
	I0505 22:27:54.972942   65347 cni.go:84] Creating CNI manager for ""
	I0505 22:27:54.972961   65347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:27:54.972970   65347 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 22:27:54.972988   65347 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.167 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-112135 NodeName:no-preload-112135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 22:27:54.973116   65347 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-112135"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 22:27:54.973181   65347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 22:27:54.985086   65347 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 22:27:54.985155   65347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 22:27:54.995991   65347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0505 22:27:55.014716   65347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 22:27:55.032918   65347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0505 22:27:55.051393   65347 ssh_runner.go:195] Run: grep 192.168.61.167	control-plane.minikube.internal$ /etc/hosts
	I0505 22:27:55.055611   65347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 22:27:55.070226   65347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 22:27:55.204035   65347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 22:27:55.224481   65347 certs.go:68] Setting up /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135 for IP: 192.168.61.167
	I0505 22:27:55.224508   65347 certs.go:194] generating shared ca certs ...
	I0505 22:27:55.224533   65347 certs.go:226] acquiring lock for ca certs: {Name:mk9a8f97724855697074b08194c6247f8f9e5c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 22:27:55.224717   65347 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key
	I0505 22:27:55.224777   65347 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key
	I0505 22:27:55.224793   65347 certs.go:256] generating profile certs ...
	I0505 22:27:55.224899   65347 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.key
	I0505 22:27:55.224986   65347 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/apiserver.key.5bdf9414
	I0505 22:27:55.225048   65347 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/proxy-client.key
	I0505 22:27:55.225199   65347 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem (1338 bytes)
	W0505 22:27:55.225243   65347 certs.go:480] ignoring /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798_empty.pem, impossibly tiny 0 bytes
	I0505 22:27:55.225260   65347 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 22:27:55.225300   65347 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/ca.pem (1078 bytes)
	I0505 22:27:55.225344   65347 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/cert.pem (1123 bytes)
	I0505 22:27:55.225382   65347 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/certs/key.pem (1675 bytes)
	I0505 22:27:55.225443   65347 certs.go:484] found cert: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem (1708 bytes)
	I0505 22:27:55.226375   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 22:27:55.257189   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 22:27:55.328537   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 22:27:55.377580   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 22:27:55.423956   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0505 22:27:55.457150   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 22:27:55.483347   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 22:27:55.510890   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 22:27:55.538447   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/certs/18798.pem --> /usr/share/ca-certificates/18798.pem (1338 bytes)
	I0505 22:27:55.566813   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/ssl/certs/187982.pem --> /usr/share/ca-certificates/187982.pem (1708 bytes)
	I0505 22:27:55.595034   65347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18602-11466/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 22:27:55.622950   65347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 22:27:55.644333   65347 ssh_runner.go:195] Run: openssl version
	I0505 22:27:55.651276   65347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 22:27:55.665585   65347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:27:55.670747   65347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:27:55.670804   65347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 22:27:55.677374   65347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 22:27:55.692386   65347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18798.pem && ln -fs /usr/share/ca-certificates/18798.pem /etc/ssl/certs/18798.pem"
	I0505 22:27:55.707774   65347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18798.pem
	I0505 22:27:55.713217   65347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:11 /usr/share/ca-certificates/18798.pem
	I0505 22:27:55.713280   65347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18798.pem
	I0505 22:27:55.720274   65347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18798.pem /etc/ssl/certs/51391683.0"
	I0505 22:27:55.737201   65347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/187982.pem && ln -fs /usr/share/ca-certificates/187982.pem /etc/ssl/certs/187982.pem"
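The openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its subject-hash name (e.g. b5213941.0, 51391683.0). A sketch of that step in Go, shelling out to openssl for the hash exactly as the log does (paths and function name are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
// creates the <hash>.0 symlink that the ca-certificates layout expects.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs equivalent: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}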
	I0505 22:27:56.697574   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:59.200397   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:27:55.272579   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:55.772734   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:56.272721   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:56.772399   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:57.272498   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:57.772715   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:58.272163   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:58.771829   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:27:58.771911   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:27:58.821279   66092 cri.go:89] found id: ""
	I0505 22:27:58.821310   66092 logs.go:276] 0 containers: []
	W0505 22:27:58.821320   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:27:58.821328   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:27:58.821392   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:27:58.865661   66092 cri.go:89] found id: ""
	I0505 22:27:58.865690   66092 logs.go:276] 0 containers: []
	W0505 22:27:58.865708   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:27:58.865713   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:27:58.865782   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:27:58.905008   66092 cri.go:89] found id: ""
	I0505 22:27:58.905044   66092 logs.go:276] 0 containers: []
	W0505 22:27:58.905056   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:27:58.905064   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:27:58.905144   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:27:58.947343   66092 cri.go:89] found id: ""
	I0505 22:27:58.947364   66092 logs.go:276] 0 containers: []
	W0505 22:27:58.947371   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:27:58.947376   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:27:58.947425   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:27:58.988592   66092 cri.go:89] found id: ""
	I0505 22:27:58.988620   66092 logs.go:276] 0 containers: []
	W0505 22:27:58.988632   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:27:58.988642   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:27:58.988703   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:27:59.034652   66092 cri.go:89] found id: ""
	I0505 22:27:59.034681   66092 logs.go:276] 0 containers: []
	W0505 22:27:59.034690   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:27:59.034706   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:27:59.034771   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:27:59.084822   66092 cri.go:89] found id: ""
	I0505 22:27:59.084849   66092 logs.go:276] 0 containers: []
	W0505 22:27:59.084861   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:27:59.084869   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:27:59.084936   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:27:59.125202   66092 cri.go:89] found id: ""
	I0505 22:27:59.125234   66092 logs.go:276] 0 containers: []
	W0505 22:27:59.125245   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:27:59.125256   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:27:59.125270   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:27:59.189293   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:27:59.189328   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:27:59.206748   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:27:59.206775   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:27:59.340290   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:27:59.340322   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:27:59.340356   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:27:59.417048   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:27:59.417082   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:27:55.751964   65347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/187982.pem
	I0505 22:27:55.757460   65347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:11 /usr/share/ca-certificates/187982.pem
	I0505 22:27:55.757528   65347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/187982.pem
	I0505 22:27:55.764027   65347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/187982.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 22:27:55.779028   65347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 22:27:55.786340   65347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 22:27:55.794801   65347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 22:27:55.801784   65347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 22:27:55.808802   65347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 22:27:55.816796   65347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 22:27:55.823891   65347 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 22:27:55.830914   65347 kubeadm.go:391] StartCluster: {Name:no-preload-112135 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-112135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.167 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 22:27:55.831024   65347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0505 22:27:55.831086   65347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:27:55.873026   65347 cri.go:89] found id: ""
	I0505 22:27:55.873107   65347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 22:27:55.884820   65347 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 22:27:55.884849   65347 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 22:27:55.884855   65347 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 22:27:55.884892   65347 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 22:27:55.896699   65347 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 22:27:55.897753   65347 kubeconfig.go:125] found "no-preload-112135" server: "https://192.168.61.167:8443"
	I0505 22:27:55.899812   65347 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 22:27:55.910839   65347 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.167
	I0505 22:27:55.910867   65347 kubeadm.go:1154] stopping kube-system containers ...
	I0505 22:27:55.910879   65347 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0505 22:27:55.910927   65347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0505 22:27:55.953490   65347 cri.go:89] found id: ""
	I0505 22:27:55.953546   65347 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0505 22:27:55.971688   65347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:27:55.983288   65347 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:27:55.983313   65347 kubeadm.go:156] found existing configuration files:
	
	I0505 22:27:55.983369   65347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:27:55.994004   65347 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:27:55.994080   65347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:27:56.005648   65347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:27:56.016395   65347 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:27:56.016449   65347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:27:56.027253   65347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:27:56.037625   65347 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:27:56.037669   65347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:27:56.048612   65347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:27:56.059355   65347 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:27:56.059415   65347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:27:56.070784   65347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 22:27:56.083269   65347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:56.205996   65347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:57.439449   65347 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.233412464s)
	I0505 22:27:57.439497   65347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:57.714029   65347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:57.790373   65347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:27:57.908775   65347 api_server.go:52] waiting for apiserver process to appear ...
	I0505 22:27:57.908868   65347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:58.409662   65347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:58.909470   65347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:27:58.945249   65347 api_server.go:72] duration metric: took 1.036470934s to wait for apiserver process to appear ...
	I0505 22:27:58.945281   65347 api_server.go:88] waiting for apiserver healthz status ...
	I0505 22:27:58.945306   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:27:58.945981   65347 api_server.go:269] stopped: https://192.168.61.167:8443/healthz: Get "https://192.168.61.167:8443/healthz": dial tcp 192.168.61.167:8443: connect: connection refused
	I0505 22:27:59.445506   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:01.697779   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:04.196585   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:01.624372   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0505 22:28:01.624401   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0505 22:28:01.624415   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:01.657033   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0505 22:28:01.657067   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0505 22:28:01.945426   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:01.953583   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:28:01.953611   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:28:02.446426   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:02.452702   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:28:02.452752   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:28:02.945870   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:02.950593   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:28:02.950615   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:28:03.445686   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:03.451055   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:28:03.451093   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:28:03.946443   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:03.951228   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:28:03.951258   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:28:04.445820   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:04.450230   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0505 22:28:04.450255   65347 api_server.go:103] status: https://192.168.61.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0505 22:28:04.945821   65347 api_server.go:253] Checking apiserver healthz at https://192.168.61.167:8443/healthz ...
	I0505 22:28:04.953916   65347 api_server.go:279] https://192.168.61.167:8443/healthz returned 200:
	ok
	I0505 22:28:04.964327   65347 api_server.go:141] control plane version: v1.30.0
	I0505 22:28:04.964360   65347 api_server.go:131] duration metric: took 6.019070422s to wait for apiserver health ...
	I0505 22:28:04.964371   65347 cni.go:84] Creating CNI manager for ""
	I0505 22:28:04.964380   65347 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 22:28:04.966586   65347 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 22:28:01.963048   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:01.982712   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:01.982790   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:02.033005   66092 cri.go:89] found id: ""
	I0505 22:28:02.033033   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.033044   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:02.033052   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:02.033110   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:02.074680   66092 cri.go:89] found id: ""
	I0505 22:28:02.074710   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.074721   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:02.074733   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:02.074799   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:02.113901   66092 cri.go:89] found id: ""
	I0505 22:28:02.113928   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.113939   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:02.113945   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:02.114008   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:02.151794   66092 cri.go:89] found id: ""
	I0505 22:28:02.151824   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.151832   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:02.151838   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:02.151915   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:02.200918   66092 cri.go:89] found id: ""
	I0505 22:28:02.200948   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.200956   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:02.200962   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:02.201021   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:02.252257   66092 cri.go:89] found id: ""
	I0505 22:28:02.252298   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.252311   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:02.252319   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:02.252390   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:02.303949   66092 cri.go:89] found id: ""
	I0505 22:28:02.303993   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.304003   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:02.304008   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:02.304064   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:02.350895   66092 cri.go:89] found id: ""
	I0505 22:28:02.350925   66092 logs.go:276] 0 containers: []
	W0505 22:28:02.350933   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:02.350941   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:02.350952   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:02.412401   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:02.412434   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:02.428756   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:02.428787   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:02.517984   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:02.518007   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:02.518024   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:02.604612   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:02.604651   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:04.968020   65347 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 22:28:04.981335   65347 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0505 22:28:05.007156   65347 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 22:28:05.019306   65347 system_pods.go:59] 8 kube-system pods found
	I0505 22:28:05.019351   65347 system_pods.go:61] "coredns-7db6d8ff4d-kdxf9" [ac609b5d-918e-4ea8-b8e9-f56683f89b6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 22:28:05.019364   65347 system_pods.go:61] "etcd-no-preload-112135" [920d8e78-a60b-42e6-bd2e-25a9d0d91978] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0505 22:28:05.019376   65347 system_pods.go:61] "kube-apiserver-no-preload-112135" [cc25f6f6-cfa5-4074-bacb-60a3b8907ff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0505 22:28:05.019385   65347 system_pods.go:61] "kube-controller-manager-no-preload-112135" [cd5014ea-4df4-4b15-95c3-5a78d38b103b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0505 22:28:05.019401   65347 system_pods.go:61] "kube-proxy-2265z" [54c41f2c-d6de-46ab-9d00-a7b138632a97] Running
	I0505 22:28:05.019412   65347 system_pods.go:61] "kube-scheduler-no-preload-112135" [4d580f00-aef8-493c-b850-47a5ac2cab05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0505 22:28:05.019425   65347 system_pods.go:61] "metrics-server-569cc877fc-hhggh" [17acab90-206d-4412-ab5e-844893c0a554] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0505 22:28:05.019435   65347 system_pods.go:61] "storage-provisioner" [b7bc6eb3-be2f-4576-83e0-337bf0337a2a] Running
	I0505 22:28:05.019449   65347 system_pods.go:74] duration metric: took 12.270441ms to wait for pod list to return data ...
	I0505 22:28:05.019462   65347 node_conditions.go:102] verifying NodePressure condition ...
	I0505 22:28:05.024846   65347 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 22:28:05.024876   65347 node_conditions.go:123] node cpu capacity is 2
	I0505 22:28:05.024889   65347 node_conditions.go:105] duration metric: took 5.416745ms to run NodePressure ...
	I0505 22:28:05.024910   65347 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0505 22:28:05.304390   65347 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0505 22:28:05.316266   65347 kubeadm.go:733] kubelet initialised
	I0505 22:28:05.316305   65347 kubeadm.go:734] duration metric: took 11.885271ms waiting for restarted kubelet to initialise ...
	I0505 22:28:05.316314   65347 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 22:28:05.329515   65347 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kdxf9" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:05.340470   65347 pod_ready.go:97] node "no-preload-112135" hosting pod "coredns-7db6d8ff4d-kdxf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112135" has status "Ready":"False"
	I0505 22:28:05.340501   65347 pod_ready.go:81] duration metric: took 10.955934ms for pod "coredns-7db6d8ff4d-kdxf9" in "kube-system" namespace to be "Ready" ...
	E0505 22:28:05.340513   65347 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-112135" hosting pod "coredns-7db6d8ff4d-kdxf9" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112135" has status "Ready":"False"
	I0505 22:28:05.340524   65347 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:05.345267   65347 pod_ready.go:97] node "no-preload-112135" hosting pod "etcd-no-preload-112135" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112135" has status "Ready":"False"
	I0505 22:28:05.345294   65347 pod_ready.go:81] duration metric: took 4.761047ms for pod "etcd-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	E0505 22:28:05.345305   65347 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-112135" hosting pod "etcd-no-preload-112135" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112135" has status "Ready":"False"
	I0505 22:28:05.345313   65347 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:05.350379   65347 pod_ready.go:97] node "no-preload-112135" hosting pod "kube-apiserver-no-preload-112135" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112135" has status "Ready":"False"
	I0505 22:28:05.350398   65347 pod_ready.go:81] duration metric: took 5.077926ms for pod "kube-apiserver-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	E0505 22:28:05.350406   65347 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-112135" hosting pod "kube-apiserver-no-preload-112135" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-112135" has status "Ready":"False"
	I0505 22:28:05.350414   65347 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:06.197753   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:08.698356   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:05.178708   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:05.195327   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:05.195447   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:05.240739   66092 cri.go:89] found id: ""
	I0505 22:28:05.240771   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.240783   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:05.240790   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:05.240847   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:05.284032   66092 cri.go:89] found id: ""
	I0505 22:28:05.284059   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.284068   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:05.284076   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:05.284135   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:05.336826   66092 cri.go:89] found id: ""
	I0505 22:28:05.336848   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.336859   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:05.336866   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:05.336924   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:05.386165   66092 cri.go:89] found id: ""
	I0505 22:28:05.386195   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.386207   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:05.386217   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:05.386277   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:05.430101   66092 cri.go:89] found id: ""
	I0505 22:28:05.430145   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.430154   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:05.430160   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:05.430216   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:05.474044   66092 cri.go:89] found id: ""
	I0505 22:28:05.474072   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.474084   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:05.474092   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:05.474148   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:05.517962   66092 cri.go:89] found id: ""
	I0505 22:28:05.518003   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.518022   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:05.518030   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:05.518099   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:05.561337   66092 cri.go:89] found id: ""
	I0505 22:28:05.561367   66092 logs.go:276] 0 containers: []
	W0505 22:28:05.561380   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:05.561391   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:05.561406   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:05.612329   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:05.612368   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:05.629195   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:05.629221   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:05.708882   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:05.708917   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:05.708932   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:05.789032   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:05.789070   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:08.340865   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:08.357282   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:08.357365   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:08.405476   66092 cri.go:89] found id: ""
	I0505 22:28:08.405508   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.405516   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:08.405522   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:08.405586   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:08.456317   66092 cri.go:89] found id: ""
	I0505 22:28:08.456348   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.456359   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:08.456366   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:08.456440   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:08.498561   66092 cri.go:89] found id: ""
	I0505 22:28:08.498590   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.498602   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:08.498608   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:08.498668   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:08.537620   66092 cri.go:89] found id: ""
	I0505 22:28:08.537654   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.537664   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:08.537671   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:08.537729   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:08.576644   66092 cri.go:89] found id: ""
	I0505 22:28:08.576677   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.576688   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:08.576696   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:08.576769   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:08.616957   66092 cri.go:89] found id: ""
	I0505 22:28:08.616987   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.616998   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:08.617006   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:08.617069   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:08.662277   66092 cri.go:89] found id: ""
	I0505 22:28:08.662304   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.662312   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:08.662317   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:08.662367   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:08.705792   66092 cri.go:89] found id: ""
	I0505 22:28:08.705816   66092 logs.go:276] 0 containers: []
	W0505 22:28:08.705826   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:08.705836   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:08.705850   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:08.761112   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:08.761152   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:08.780323   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:08.780357   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:08.862929   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:08.862954   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:08.862972   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:08.946737   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:08.946774   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:07.357783   65347 pod_ready.go:102] pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:09.358270   65347 pod_ready.go:102] pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:11.197026   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:13.197758   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:11.504121   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:11.520541   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:11.520610   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:11.561211   66092 cri.go:89] found id: ""
	I0505 22:28:11.561252   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.561264   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:11.561272   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:11.561335   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:11.599654   66092 cri.go:89] found id: ""
	I0505 22:28:11.599706   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.599727   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:11.599738   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:11.599801   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:11.641350   66092 cri.go:89] found id: ""
	I0505 22:28:11.641377   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.641387   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:11.641393   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:11.641458   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:11.681455   66092 cri.go:89] found id: ""
	I0505 22:28:11.681482   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.681491   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:11.681496   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:11.681545   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:11.722505   66092 cri.go:89] found id: ""
	I0505 22:28:11.722532   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.722542   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:11.722549   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:11.722628   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:11.774381   66092 cri.go:89] found id: ""
	I0505 22:28:11.774417   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.774429   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:11.774439   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:11.774503   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:11.813848   66092 cri.go:89] found id: ""
	I0505 22:28:11.813872   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.813881   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:11.813889   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:11.813971   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:11.854325   66092 cri.go:89] found id: ""
	I0505 22:28:11.854355   66092 logs.go:276] 0 containers: []
	W0505 22:28:11.854364   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:11.854373   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:11.854386   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:11.906804   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:11.906841   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:11.923930   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:11.923958   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:12.001550   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:12.001574   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:12.001590   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:12.091081   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:12.091117   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:14.649251   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:14.663919   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:14.664004   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:14.706583   66092 cri.go:89] found id: ""
	I0505 22:28:14.706606   66092 logs.go:276] 0 containers: []
	W0505 22:28:14.706618   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:14.706623   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:14.706668   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:14.748612   66092 cri.go:89] found id: ""
	I0505 22:28:14.748637   66092 logs.go:276] 0 containers: []
	W0505 22:28:14.748645   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:14.748650   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:14.748703   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:14.789831   66092 cri.go:89] found id: ""
	I0505 22:28:14.789861   66092 logs.go:276] 0 containers: []
	W0505 22:28:14.789872   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:14.789886   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:14.789953   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:14.839559   66092 cri.go:89] found id: ""
	I0505 22:28:14.839589   66092 logs.go:276] 0 containers: []
	W0505 22:28:14.839597   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:14.839611   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:14.839669   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:14.880932   66092 cri.go:89] found id: ""
	I0505 22:28:14.880956   66092 logs.go:276] 0 containers: []
	W0505 22:28:14.880963   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:14.880968   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:14.881039   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:14.922567   66092 cri.go:89] found id: ""
	I0505 22:28:14.922597   66092 logs.go:276] 0 containers: []
	W0505 22:28:14.922608   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:14.922636   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:14.922701   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:14.970478   66092 cri.go:89] found id: ""
	I0505 22:28:14.970507   66092 logs.go:276] 0 containers: []
	W0505 22:28:14.970517   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:14.970526   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:14.970589   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:15.010271   66092 cri.go:89] found id: ""
	I0505 22:28:15.010301   66092 logs.go:276] 0 containers: []
	W0505 22:28:15.010311   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:15.010322   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:15.010337   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:15.069745   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:15.069776   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:11.361151   65347 pod_ready.go:102] pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:13.857094   65347 pod_ready.go:102] pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:15.199123   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:17.697779   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:19.700011   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:15.144860   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:15.144903   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:15.164908   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:15.164943   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:15.248031   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:15.248052   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:15.248066   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:17.830622   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:17.846466   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:17.846547   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:17.889982   66092 cri.go:89] found id: ""
	I0505 22:28:17.890012   66092 logs.go:276] 0 containers: []
	W0505 22:28:17.890024   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:17.890032   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:17.890086   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:17.930640   66092 cri.go:89] found id: ""
	I0505 22:28:17.930681   66092 logs.go:276] 0 containers: []
	W0505 22:28:17.930690   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:17.930695   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:17.930749   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:17.974002   66092 cri.go:89] found id: ""
	I0505 22:28:17.974032   66092 logs.go:276] 0 containers: []
	W0505 22:28:17.974040   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:17.974046   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:17.974113   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:18.013858   66092 cri.go:89] found id: ""
	I0505 22:28:18.013882   66092 logs.go:276] 0 containers: []
	W0505 22:28:18.013889   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:18.013895   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:18.013952   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:18.054464   66092 cri.go:89] found id: ""
	I0505 22:28:18.054487   66092 logs.go:276] 0 containers: []
	W0505 22:28:18.054495   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:18.054500   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:18.054562   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:18.094141   66092 cri.go:89] found id: ""
	I0505 22:28:18.094173   66092 logs.go:276] 0 containers: []
	W0505 22:28:18.094184   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:18.094191   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:18.094256   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:18.134898   66092 cri.go:89] found id: ""
	I0505 22:28:18.134931   66092 logs.go:276] 0 containers: []
	W0505 22:28:18.134943   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:18.134951   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:18.135016   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:18.174522   66092 cri.go:89] found id: ""
	I0505 22:28:18.174550   66092 logs.go:276] 0 containers: []
	W0505 22:28:18.174562   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:18.174573   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:18.174601   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:18.230139   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:18.230186   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:18.245498   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:18.245533   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:18.329718   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:18.329742   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:18.329762   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:18.406938   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:18.406971   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:15.858773   65347 pod_ready.go:102] pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:16.360703   65347 pod_ready.go:92] pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace has status "Ready":"True"
	I0505 22:28:16.360728   65347 pod_ready.go:81] duration metric: took 11.010305241s for pod "kube-controller-manager-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:16.360742   65347 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2265z" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:16.367426   65347 pod_ready.go:92] pod "kube-proxy-2265z" in "kube-system" namespace has status "Ready":"True"
	I0505 22:28:16.367449   65347 pod_ready.go:81] duration metric: took 6.697487ms for pod "kube-proxy-2265z" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:16.367458   65347 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:16.375230   65347 pod_ready.go:92] pod "kube-scheduler-no-preload-112135" in "kube-system" namespace has status "Ready":"True"
	I0505 22:28:16.375257   65347 pod_ready.go:81] duration metric: took 7.79244ms for pod "kube-scheduler-no-preload-112135" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:16.375266   65347 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace to be "Ready" ...
	I0505 22:28:18.382545   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:20.383112   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:22.198621   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:24.198734   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:20.952959   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:20.970615   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:20.970674   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:21.025253   66092 cri.go:89] found id: ""
	I0505 22:28:21.025288   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.025301   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:21.025312   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:21.025377   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:21.080681   66092 cri.go:89] found id: ""
	I0505 22:28:21.080709   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.080719   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:21.080724   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:21.080788   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:21.129979   66092 cri.go:89] found id: ""
	I0505 22:28:21.130010   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.130021   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:21.130028   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:21.130089   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:21.170099   66092 cri.go:89] found id: ""
	I0505 22:28:21.170125   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.170134   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:21.170139   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:21.170192   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:21.211801   66092 cri.go:89] found id: ""
	I0505 22:28:21.211830   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.211840   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:21.211848   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:21.211905   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:21.250550   66092 cri.go:89] found id: ""
	I0505 22:28:21.250588   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.250597   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:21.250603   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:21.250654   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:21.290353   66092 cri.go:89] found id: ""
	I0505 22:28:21.290386   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.290396   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:21.290402   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:21.290461   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:21.329535   66092 cri.go:89] found id: ""
	I0505 22:28:21.329564   66092 logs.go:276] 0 containers: []
	W0505 22:28:21.329573   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:21.329582   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:21.329596   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:21.381047   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:21.381088   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:21.397953   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:21.397989   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:21.478630   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:21.478651   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:21.478667   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:21.561621   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:21.561655   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:24.104193   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:24.119236   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:24.119308   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:24.162277   66092 cri.go:89] found id: ""
	I0505 22:28:24.162312   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.162320   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:24.162325   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:24.162385   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:24.208541   66092 cri.go:89] found id: ""
	I0505 22:28:24.208575   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.208592   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:24.208599   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:24.208665   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:24.245645   66092 cri.go:89] found id: ""
	I0505 22:28:24.245674   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.245685   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:24.245691   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:24.245750   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:24.282587   66092 cri.go:89] found id: ""
	I0505 22:28:24.282621   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.282633   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:24.282641   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:24.282706   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:24.321794   66092 cri.go:89] found id: ""
	I0505 22:28:24.321837   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.321849   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:24.321857   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:24.321927   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:24.358686   66092 cri.go:89] found id: ""
	I0505 22:28:24.358712   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.358721   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:24.358726   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:24.358774   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:24.394806   66092 cri.go:89] found id: ""
	I0505 22:28:24.394827   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.394834   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:24.394842   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:24.394897   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:24.431023   66092 cri.go:89] found id: ""
	I0505 22:28:24.431044   66092 logs.go:276] 0 containers: []
	W0505 22:28:24.431052   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:24.431060   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:24.431071   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:24.483759   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:24.483797   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:24.499387   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:24.499415   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:24.571753   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:24.571778   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:24.571794   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:24.655344   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:24.655377   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:22.883151   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:25.382555   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:26.199830   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:28.700605   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:27.198409   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:27.211325   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:27.211380   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:27.255739   66092 cri.go:89] found id: ""
	I0505 22:28:27.255767   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.255776   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:27.255782   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:27.255846   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:27.297213   66092 cri.go:89] found id: ""
	I0505 22:28:27.297245   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.297257   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:27.297264   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:27.297325   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:27.337559   66092 cri.go:89] found id: ""
	I0505 22:28:27.337581   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.337588   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:27.337593   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:27.337638   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:27.375061   66092 cri.go:89] found id: ""
	I0505 22:28:27.375085   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.375093   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:27.375099   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:27.375159   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:27.413428   66092 cri.go:89] found id: ""
	I0505 22:28:27.413456   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.413465   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:27.413471   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:27.413543   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:27.452273   66092 cri.go:89] found id: ""
	I0505 22:28:27.452307   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.452317   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:27.452324   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:27.452389   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:27.491928   66092 cri.go:89] found id: ""
	I0505 22:28:27.491955   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.491965   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:27.491972   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:27.492032   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:27.532941   66092 cri.go:89] found id: ""
	I0505 22:28:27.532969   66092 logs.go:276] 0 containers: []
	W0505 22:28:27.532976   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:27.532986   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:27.533000   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:27.617564   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:27.617610   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:27.663187   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:27.663254   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:27.714100   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:27.714127   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:27.731876   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:27.731910   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:27.813053   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:27.881717   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:29.882540   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:31.198015   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:33.698403   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:30.313434   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:30.328301   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:30.328362   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:30.366125   66092 cri.go:89] found id: ""
	I0505 22:28:30.366157   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.366166   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:30.366172   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:30.366222   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:30.408116   66092 cri.go:89] found id: ""
	I0505 22:28:30.408140   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.408148   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:30.408153   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:30.408207   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:30.448016   66092 cri.go:89] found id: ""
	I0505 22:28:30.448047   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.448058   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:30.448066   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:30.448128   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:30.484284   66092 cri.go:89] found id: ""
	I0505 22:28:30.484310   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.484320   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:30.484326   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:30.484415   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:30.521639   66092 cri.go:89] found id: ""
	I0505 22:28:30.521661   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.521669   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:30.521675   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:30.521735   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:30.556240   66092 cri.go:89] found id: ""
	I0505 22:28:30.556267   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.556277   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:30.556284   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:30.556342   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:30.600827   66092 cri.go:89] found id: ""
	I0505 22:28:30.600855   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.600865   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:30.600872   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:30.600933   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:30.640581   66092 cri.go:89] found id: ""
	I0505 22:28:30.640612   66092 logs.go:276] 0 containers: []
	W0505 22:28:30.640620   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:30.640629   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:30.640645   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:30.723423   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:30.723455   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:30.769263   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:30.769320   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:30.821009   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:30.821039   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:30.840342   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:30.840375   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:30.919944   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:33.421077   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:33.435242   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:33.435307   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:33.478499   66092 cri.go:89] found id: ""
	I0505 22:28:33.478522   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.478530   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:33.478536   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:33.478586   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:33.517253   66092 cri.go:89] found id: ""
	I0505 22:28:33.517283   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.517292   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:33.517297   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:33.517367   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:33.556035   66092 cri.go:89] found id: ""
	I0505 22:28:33.556061   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.556096   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:33.556115   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:33.556181   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:33.595011   66092 cri.go:89] found id: ""
	I0505 22:28:33.595035   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.595043   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:33.595048   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:33.595108   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:33.638700   66092 cri.go:89] found id: ""
	I0505 22:28:33.638724   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.638732   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:33.638737   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:33.638812   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:33.681602   66092 cri.go:89] found id: ""
	I0505 22:28:33.681630   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.681637   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:33.681643   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:33.681689   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:33.721728   66092 cri.go:89] found id: ""
	I0505 22:28:33.721754   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.721762   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:33.721768   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:33.721825   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:33.762525   66092 cri.go:89] found id: ""
	I0505 22:28:33.762550   66092 logs.go:276] 0 containers: []
	W0505 22:28:33.762558   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:33.762566   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:33.762578   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:33.816534   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:33.816569   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:33.833473   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:33.833504   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:33.918836   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:33.918855   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:33.918870   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:34.002446   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:34.002482   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:32.381957   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:34.382422   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:36.197229   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:38.198876   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:36.551031   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:36.565686   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:36.565744   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:36.609136   66092 cri.go:89] found id: ""
	I0505 22:28:36.609174   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.609185   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:36.609193   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:36.609253   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:36.648532   66092 cri.go:89] found id: ""
	I0505 22:28:36.648567   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.648578   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:36.648586   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:36.648649   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:36.685145   66092 cri.go:89] found id: ""
	I0505 22:28:36.685177   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.685188   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:36.685196   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:36.685317   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:36.728282   66092 cri.go:89] found id: ""
	I0505 22:28:36.728311   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.728322   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:36.728329   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:36.728390   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:36.769607   66092 cri.go:89] found id: ""
	I0505 22:28:36.769643   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.769655   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:36.769663   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:36.769746   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:36.818492   66092 cri.go:89] found id: ""
	I0505 22:28:36.818518   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.818526   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:36.818531   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:36.818589   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:36.857697   66092 cri.go:89] found id: ""
	I0505 22:28:36.857730   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.857741   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:36.857747   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:36.857793   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:36.898488   66092 cri.go:89] found id: ""
	I0505 22:28:36.898518   66092 logs.go:276] 0 containers: []
	W0505 22:28:36.898529   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:36.898539   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:36.898555   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:36.914128   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:36.914155   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:36.996492   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:36.996519   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:36.996534   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:37.075210   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:37.075250   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:37.118628   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:37.118665   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:39.669674   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:39.684083   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:39.684153   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:39.726552   66092 cri.go:89] found id: ""
	I0505 22:28:39.726582   66092 logs.go:276] 0 containers: []
	W0505 22:28:39.726591   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:39.726597   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:39.726663   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:39.763666   66092 cri.go:89] found id: ""
	I0505 22:28:39.763699   66092 logs.go:276] 0 containers: []
	W0505 22:28:39.763710   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:39.763727   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:39.763791   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:39.805758   66092 cri.go:89] found id: ""
	I0505 22:28:39.805786   66092 logs.go:276] 0 containers: []
	W0505 22:28:39.805797   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:39.805804   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:39.805868   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:39.847937   66092 cri.go:89] found id: ""
	I0505 22:28:39.847971   66092 logs.go:276] 0 containers: []
	W0505 22:28:39.847982   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:39.847989   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:39.848047   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:39.892151   66092 cri.go:89] found id: ""
	I0505 22:28:39.892180   66092 logs.go:276] 0 containers: []
	W0505 22:28:39.892188   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:39.892193   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:39.892239   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:39.930248   66092 cri.go:89] found id: ""
	I0505 22:28:39.930276   66092 logs.go:276] 0 containers: []
	W0505 22:28:39.930286   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:39.930293   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:39.930361   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:39.973748   66092 cri.go:89] found id: ""
	I0505 22:28:39.973783   66092 logs.go:276] 0 containers: []
	W0505 22:28:39.973792   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:39.973797   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:39.973845   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:40.016476   66092 cri.go:89] found id: ""
	I0505 22:28:40.016504   66092 logs.go:276] 0 containers: []
	W0505 22:28:40.016515   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:40.016525   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:40.016539   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:40.069230   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:40.069266   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:40.085277   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:40.085302   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0505 22:28:36.882767   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:39.382797   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:40.208860   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:42.698223   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	W0505 22:28:40.169549   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:40.169635   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:40.169658   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:40.251878   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:40.251920   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:42.800101   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:42.814282   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:42.814346   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:42.854079   66092 cri.go:89] found id: ""
	I0505 22:28:42.854103   66092 logs.go:276] 0 containers: []
	W0505 22:28:42.854111   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:42.854116   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:42.854161   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:42.896158   66092 cri.go:89] found id: ""
	I0505 22:28:42.896182   66092 logs.go:276] 0 containers: []
	W0505 22:28:42.896190   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:42.896195   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:42.896245   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:42.935310   66092 cri.go:89] found id: ""
	I0505 22:28:42.935339   66092 logs.go:276] 0 containers: []
	W0505 22:28:42.935349   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:42.935355   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:42.935419   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:42.978436   66092 cri.go:89] found id: ""
	I0505 22:28:42.978464   66092 logs.go:276] 0 containers: []
	W0505 22:28:42.978474   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:42.978482   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:42.978545   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:43.025030   66092 cri.go:89] found id: ""
	I0505 22:28:43.025078   66092 logs.go:276] 0 containers: []
	W0505 22:28:43.025089   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:43.025098   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:43.025168   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:43.071563   66092 cri.go:89] found id: ""
	I0505 22:28:43.071599   66092 logs.go:276] 0 containers: []
	W0505 22:28:43.071607   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:43.071615   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:43.071684   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:43.111447   66092 cri.go:89] found id: ""
	I0505 22:28:43.111470   66092 logs.go:276] 0 containers: []
	W0505 22:28:43.111487   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:43.111495   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:43.111547   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:43.152244   66092 cri.go:89] found id: ""
	I0505 22:28:43.152266   66092 logs.go:276] 0 containers: []
	W0505 22:28:43.152273   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:43.152281   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:43.152303   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:43.206786   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:43.206815   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:43.224277   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:43.224307   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:43.303350   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:43.303377   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:43.303393   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:43.388787   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:43.388825   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:41.383536   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:43.881696   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:45.196612   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:47.196885   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:49.198069   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:45.946757   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:45.963352   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:45.963432   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:46.002615   66092 cri.go:89] found id: ""
	I0505 22:28:46.002653   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.002665   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:46.002672   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:46.002735   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:46.047467   66092 cri.go:89] found id: ""
	I0505 22:28:46.047503   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.047515   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:46.047522   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:46.047585   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:46.086175   66092 cri.go:89] found id: ""
	I0505 22:28:46.086205   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.086217   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:46.086224   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:46.086288   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:46.129545   66092 cri.go:89] found id: ""
	I0505 22:28:46.129575   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.129586   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:46.129593   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:46.129651   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:46.175701   66092 cri.go:89] found id: ""
	I0505 22:28:46.175728   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.175735   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:46.175750   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:46.175819   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:46.223058   66092 cri.go:89] found id: ""
	I0505 22:28:46.223090   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.223100   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:46.223108   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:46.223208   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:46.266296   66092 cri.go:89] found id: ""
	I0505 22:28:46.266326   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.266337   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:46.266344   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:46.266406   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:46.304290   66092 cri.go:89] found id: ""
	I0505 22:28:46.304319   66092 logs.go:276] 0 containers: []
	W0505 22:28:46.304329   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:46.304340   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:46.304357   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:46.385101   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:46.385132   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:46.432518   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:46.432566   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:46.486747   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:46.486800   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:46.503912   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:46.503945   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:46.582469   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:49.083158   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:49.097830   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:49.097904   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:49.145006   66092 cri.go:89] found id: ""
	I0505 22:28:49.145033   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.145050   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:49.145056   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:49.145106   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:49.191997   66092 cri.go:89] found id: ""
	I0505 22:28:49.192026   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.192044   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:49.192051   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:49.192117   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:49.233381   66092 cri.go:89] found id: ""
	I0505 22:28:49.233410   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.233421   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:49.233428   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:49.233479   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:49.274452   66092 cri.go:89] found id: ""
	I0505 22:28:49.274481   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.274492   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:49.274499   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:49.274563   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:49.318224   66092 cri.go:89] found id: ""
	I0505 22:28:49.318259   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.318270   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:49.318277   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:49.318332   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:49.364630   66092 cri.go:89] found id: ""
	I0505 22:28:49.364653   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.364660   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:49.364666   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:49.364779   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:49.410486   66092 cri.go:89] found id: ""
	I0505 22:28:49.410516   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.410527   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:49.410534   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:49.410589   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:49.450342   66092 cri.go:89] found id: ""
	I0505 22:28:49.450375   66092 logs.go:276] 0 containers: []
	W0505 22:28:49.450387   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:49.450400   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:49.450416   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:49.533087   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:49.533112   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:49.533128   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:49.612055   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:49.612089   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:49.658257   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:49.658282   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:49.710093   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:49.710127   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:45.885416   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:48.382538   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:51.198740   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:53.198783   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:52.226541   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:52.241354   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:52.241438   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:52.281211   66092 cri.go:89] found id: ""
	I0505 22:28:52.281236   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.281249   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:52.281254   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:52.281302   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:52.323497   66092 cri.go:89] found id: ""
	I0505 22:28:52.323526   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.323537   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:52.323543   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:52.323597   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:52.367746   66092 cri.go:89] found id: ""
	I0505 22:28:52.367771   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.367782   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:52.367790   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:52.367851   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:52.413834   66092 cri.go:89] found id: ""
	I0505 22:28:52.413861   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.413869   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:52.413881   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:52.413941   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:52.455794   66092 cri.go:89] found id: ""
	I0505 22:28:52.455821   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.455831   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:52.455836   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:52.455884   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:52.496844   66092 cri.go:89] found id: ""
	I0505 22:28:52.496869   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.496880   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:52.496887   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:52.496947   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:52.536831   66092 cri.go:89] found id: ""
	I0505 22:28:52.536863   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.536874   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:52.536881   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:52.536941   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:52.582127   66092 cri.go:89] found id: ""
	I0505 22:28:52.582156   66092 logs.go:276] 0 containers: []
	W0505 22:28:52.582166   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:52.582185   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:52.582200   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:52.667855   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:52.667896   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:52.718032   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:52.718063   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:52.769481   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:52.769514   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:52.784990   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:52.785016   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:52.867571   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:50.883051   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:52.883260   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:54.883831   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:55.698209   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:58.199130   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:55.368740   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:55.386119   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:55.386175   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:55.431641   66092 cri.go:89] found id: ""
	I0505 22:28:55.431667   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.431677   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:55.431689   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:55.431754   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:55.472459   66092 cri.go:89] found id: ""
	I0505 22:28:55.472486   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.472496   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:55.472503   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:55.472565   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:55.514252   66092 cri.go:89] found id: ""
	I0505 22:28:55.514275   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.514283   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:55.514288   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:55.514342   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:55.573200   66092 cri.go:89] found id: ""
	I0505 22:28:55.573228   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.573240   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:55.573247   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:55.573310   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:55.637985   66092 cri.go:89] found id: ""
	I0505 22:28:55.638013   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.638025   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:55.638032   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:55.638093   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:55.709103   66092 cri.go:89] found id: ""
	I0505 22:28:55.709126   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.709137   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:55.709143   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:55.709199   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:55.746674   66092 cri.go:89] found id: ""
	I0505 22:28:55.746703   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.746713   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:55.746719   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:55.746777   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:55.786994   66092 cri.go:89] found id: ""
	I0505 22:28:55.787021   66092 logs.go:276] 0 containers: []
	W0505 22:28:55.787031   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:55.787041   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:55.787057   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:55.841876   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:55.841913   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:55.858062   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:55.858098   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:55.943510   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:55.943531   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:55.943547   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:56.035491   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:56.035530   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:58.582733   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:28:58.599530   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:28:58.599595   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:28:58.650146   66092 cri.go:89] found id: ""
	I0505 22:28:58.650172   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.650180   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:28:58.650185   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:28:58.650248   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:28:58.700629   66092 cri.go:89] found id: ""
	I0505 22:28:58.700653   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.700661   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:28:58.700665   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:28:58.700714   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:28:58.743353   66092 cri.go:89] found id: ""
	I0505 22:28:58.743384   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.743394   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:28:58.743400   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:28:58.743462   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:28:58.786632   66092 cri.go:89] found id: ""
	I0505 22:28:58.786659   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.786667   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:28:58.786677   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:28:58.786735   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:28:58.832022   66092 cri.go:89] found id: ""
	I0505 22:28:58.832044   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.832052   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:28:58.832057   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:28:58.832118   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:28:58.876198   66092 cri.go:89] found id: ""
	I0505 22:28:58.876226   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.876237   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:28:58.876245   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:28:58.876308   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:28:58.916525   66092 cri.go:89] found id: ""
	I0505 22:28:58.916549   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.916557   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:28:58.916562   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:28:58.916616   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:28:58.956443   66092 cri.go:89] found id: ""
	I0505 22:28:58.956476   66092 logs.go:276] 0 containers: []
	W0505 22:28:58.956487   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:28:58.956497   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:28:58.956513   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:28:59.010811   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:28:59.010851   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:28:59.026978   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:28:59.027008   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:28:59.115788   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:28:59.115808   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:28:59.115820   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:28:59.199988   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:28:59.200021   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:28:57.383050   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:28:59.384113   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:00.700542   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:03.197390   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:01.748780   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:01.765142   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:01.765220   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:01.807950   66092 cri.go:89] found id: ""
	I0505 22:29:01.807979   66092 logs.go:276] 0 containers: []
	W0505 22:29:01.807990   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:01.807997   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:01.808059   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:01.849383   66092 cri.go:89] found id: ""
	I0505 22:29:01.849409   66092 logs.go:276] 0 containers: []
	W0505 22:29:01.849421   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:01.849428   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:01.849494   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:01.893043   66092 cri.go:89] found id: ""
	I0505 22:29:01.893068   66092 logs.go:276] 0 containers: []
	W0505 22:29:01.893075   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:01.893080   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:01.893144   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:01.933822   66092 cri.go:89] found id: ""
	I0505 22:29:01.933850   66092 logs.go:276] 0 containers: []
	W0505 22:29:01.933861   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:01.933867   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:01.933938   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:01.972494   66092 cri.go:89] found id: ""
	I0505 22:29:01.972516   66092 logs.go:276] 0 containers: []
	W0505 22:29:01.972523   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:01.972528   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:01.972594   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:02.011331   66092 cri.go:89] found id: ""
	I0505 22:29:02.011355   66092 logs.go:276] 0 containers: []
	W0505 22:29:02.011362   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:02.011368   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:02.011426   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:02.051338   66092 cri.go:89] found id: ""
	I0505 22:29:02.051365   66092 logs.go:276] 0 containers: []
	W0505 22:29:02.051375   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:02.051382   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:02.051444   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:02.091187   66092 cri.go:89] found id: ""
	I0505 22:29:02.091219   66092 logs.go:276] 0 containers: []
	W0505 22:29:02.091227   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:02.091235   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:02.091246   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:02.145913   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:02.145957   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:02.163174   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:02.163198   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:02.250281   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:02.250299   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:02.250312   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:02.331752   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:02.331785   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:04.880893   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:04.896060   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:04.896131   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:04.935796   66092 cri.go:89] found id: ""
	I0505 22:29:04.935823   66092 logs.go:276] 0 containers: []
	W0505 22:29:04.935833   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:04.935840   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:04.935910   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:04.974599   66092 cri.go:89] found id: ""
	I0505 22:29:04.974623   66092 logs.go:276] 0 containers: []
	W0505 22:29:04.974638   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:04.974645   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:04.974697   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:05.014926   66092 cri.go:89] found id: ""
	I0505 22:29:05.014952   66092 logs.go:276] 0 containers: []
	W0505 22:29:05.014962   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:05.014969   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:05.015039   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:05.054127   66092 cri.go:89] found id: ""
	I0505 22:29:05.054153   66092 logs.go:276] 0 containers: []
	W0505 22:29:05.054161   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:05.054166   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:05.054215   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:05.090668   66092 cri.go:89] found id: ""
	I0505 22:29:05.090693   66092 logs.go:276] 0 containers: []
	W0505 22:29:05.090704   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:05.090710   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:05.090768   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:01.890406   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:04.381541   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:05.198849   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:07.697538   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:09.699229   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:05.128050   66092 cri.go:89] found id: ""
	I0505 22:29:05.128089   66092 logs.go:276] 0 containers: []
	W0505 22:29:05.128097   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:05.128102   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:05.128164   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:05.172237   66092 cri.go:89] found id: ""
	I0505 22:29:05.172261   66092 logs.go:276] 0 containers: []
	W0505 22:29:05.172273   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:05.172281   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:05.172346   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:05.210990   66092 cri.go:89] found id: ""
	I0505 22:29:05.211016   66092 logs.go:276] 0 containers: []
	W0505 22:29:05.211024   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:05.211034   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:05.211050   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:05.255003   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:05.255029   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:05.309203   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:05.309244   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:05.324669   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:05.324701   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:05.403165   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:05.403187   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:05.403208   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:07.985084   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:08.000298   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:08.000376   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:08.051283   66092 cri.go:89] found id: ""
	I0505 22:29:08.051315   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.051325   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:08.051333   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:08.051399   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:08.092541   66092 cri.go:89] found id: ""
	I0505 22:29:08.092571   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.092581   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:08.092588   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:08.092657   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:08.130358   66092 cri.go:89] found id: ""
	I0505 22:29:08.130383   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.130391   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:08.130396   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:08.130452   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:08.173179   66092 cri.go:89] found id: ""
	I0505 22:29:08.173208   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.173215   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:08.173221   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:08.173283   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:08.214138   66092 cri.go:89] found id: ""
	I0505 22:29:08.214167   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.214180   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:08.214197   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:08.214272   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:08.255287   66092 cri.go:89] found id: ""
	I0505 22:29:08.255312   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.255320   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:08.255326   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:08.255375   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:08.300088   66092 cri.go:89] found id: ""
	I0505 22:29:08.300111   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.300119   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:08.300124   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:08.300184   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:08.339880   66092 cri.go:89] found id: ""
	I0505 22:29:08.339904   66092 logs.go:276] 0 containers: []
	W0505 22:29:08.339912   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:08.339922   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:08.339937   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:08.356053   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:08.356087   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:08.434272   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:08.434293   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:08.434310   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:08.513479   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:08.513519   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:08.558469   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:08.558502   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:06.382087   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:08.382819   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:10.384226   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:12.197693   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:14.198845   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:11.111081   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:11.126283   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:11.126372   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:11.179511   66092 cri.go:89] found id: ""
	I0505 22:29:11.179535   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.179543   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:11.179547   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:11.179619   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:11.223629   66092 cri.go:89] found id: ""
	I0505 22:29:11.223656   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.223667   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:11.223674   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:11.223743   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:11.264554   66092 cri.go:89] found id: ""
	I0505 22:29:11.264587   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.264597   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:11.264605   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:11.264671   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:11.306958   66092 cri.go:89] found id: ""
	I0505 22:29:11.306983   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.306991   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:11.306997   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:11.307063   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:11.347971   66092 cri.go:89] found id: ""
	I0505 22:29:11.347995   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.348002   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:11.348007   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:11.348054   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:11.385570   66092 cri.go:89] found id: ""
	I0505 22:29:11.385591   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.385599   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:11.385603   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:11.385658   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:11.424820   66092 cri.go:89] found id: ""
	I0505 22:29:11.424843   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.424851   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:11.424857   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:11.424905   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:11.462821   66092 cri.go:89] found id: ""
	I0505 22:29:11.462848   66092 logs.go:276] 0 containers: []
	W0505 22:29:11.462856   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:11.462864   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:11.462875   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:11.540194   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:11.540218   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:11.540230   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:11.620560   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:11.620597   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:11.664939   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:11.664973   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:11.716270   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:11.716301   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:14.232208   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:14.247576   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:14.247656   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:14.289009   66092 cri.go:89] found id: ""
	I0505 22:29:14.289038   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.289049   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:14.289057   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:14.289122   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:14.327196   66092 cri.go:89] found id: ""
	I0505 22:29:14.327226   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.327237   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:14.327244   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:14.327310   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:14.366059   66092 cri.go:89] found id: ""
	I0505 22:29:14.366087   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.366097   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:14.366104   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:14.366163   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:14.406774   66092 cri.go:89] found id: ""
	I0505 22:29:14.406797   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.406804   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:14.406810   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:14.406855   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:14.446044   66092 cri.go:89] found id: ""
	I0505 22:29:14.446070   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.446077   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:14.446083   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:14.446139   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:14.482477   66092 cri.go:89] found id: ""
	I0505 22:29:14.482502   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.482510   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:14.482517   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:14.482571   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:14.523684   66092 cri.go:89] found id: ""
	I0505 22:29:14.523713   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.523723   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:14.523729   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:14.523781   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:14.567899   66092 cri.go:89] found id: ""
	I0505 22:29:14.567925   66092 logs.go:276] 0 containers: []
	W0505 22:29:14.567937   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:14.567948   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:14.567964   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:14.627514   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:14.627545   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:14.642844   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:14.642889   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:14.732438   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:14.732461   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:14.732478   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:14.812125   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:14.812157   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:12.882357   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:15.384430   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:16.199672   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:18.697783   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:17.365336   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:17.381758   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:17.381851   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:17.420054   66092 cri.go:89] found id: ""
	I0505 22:29:17.420079   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.420087   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:17.420092   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:17.420151   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:17.458110   66092 cri.go:89] found id: ""
	I0505 22:29:17.458137   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.458144   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:17.458149   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:17.458194   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:17.494553   66092 cri.go:89] found id: ""
	I0505 22:29:17.494573   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.494581   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:17.494586   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:17.494642   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:17.531633   66092 cri.go:89] found id: ""
	I0505 22:29:17.531662   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.531674   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:17.531681   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:17.531737   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:17.570546   66092 cri.go:89] found id: ""
	I0505 22:29:17.570572   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.570580   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:17.570586   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:17.570648   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:17.608239   66092 cri.go:89] found id: ""
	I0505 22:29:17.608267   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.608276   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:17.608282   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:17.608329   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:17.655107   66092 cri.go:89] found id: ""
	I0505 22:29:17.655137   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.655148   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:17.655162   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:17.655238   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:17.693976   66092 cri.go:89] found id: ""
	I0505 22:29:17.694006   66092 logs.go:276] 0 containers: []
	W0505 22:29:17.694016   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:17.694027   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:17.694044   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:17.750176   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:17.750210   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:17.764721   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:17.764748   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:17.850115   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:17.850135   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:17.850148   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:17.933388   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:17.933420   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:17.882011   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:19.883952   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:21.198033   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:23.697403   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:20.479441   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:20.497151   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:20.497218   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:20.538131   66092 cri.go:89] found id: ""
	I0505 22:29:20.538158   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.538169   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:20.538176   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:20.538227   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:20.583467   66092 cri.go:89] found id: ""
	I0505 22:29:20.583511   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.583522   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:20.583528   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:20.583582   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:20.624051   66092 cri.go:89] found id: ""
	I0505 22:29:20.624083   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.624095   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:20.624126   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:20.624194   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:20.665476   66092 cri.go:89] found id: ""
	I0505 22:29:20.665506   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.665517   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:20.665526   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:20.665593   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:20.708745   66092 cri.go:89] found id: ""
	I0505 22:29:20.708768   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.708776   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:20.708781   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:20.708833   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:20.756496   66092 cri.go:89] found id: ""
	I0505 22:29:20.756522   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.756530   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:20.756540   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:20.756592   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:20.800384   66092 cri.go:89] found id: ""
	I0505 22:29:20.800411   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.800422   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:20.800428   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:20.800490   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:20.839745   66092 cri.go:89] found id: ""
	I0505 22:29:20.839774   66092 logs.go:276] 0 containers: []
	W0505 22:29:20.839785   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:20.839796   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:20.839808   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:20.900943   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:20.900976   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:20.915887   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:20.915913   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:20.994679   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:20.994696   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:20.994709   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:21.079391   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:21.079435   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:23.625221   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:23.642374   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:23.642526   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:23.689730   66092 cri.go:89] found id: ""
	I0505 22:29:23.689755   66092 logs.go:276] 0 containers: []
	W0505 22:29:23.689763   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:23.689770   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:23.689831   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:23.727597   66092 cri.go:89] found id: ""
	I0505 22:29:23.727622   66092 logs.go:276] 0 containers: []
	W0505 22:29:23.727631   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:23.727638   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:23.727699   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:23.772081   66092 cri.go:89] found id: ""
	I0505 22:29:23.772111   66092 logs.go:276] 0 containers: []
	W0505 22:29:23.772127   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:23.772136   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:23.772206   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:23.811407   66092 cri.go:89] found id: ""
	I0505 22:29:23.811436   66092 logs.go:276] 0 containers: []
	W0505 22:29:23.811446   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:23.811453   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:23.811521   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:23.858788   66092 cri.go:89] found id: ""
	I0505 22:29:23.858813   66092 logs.go:276] 0 containers: []
	W0505 22:29:23.858821   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:23.858826   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:23.858881   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:23.933818   66092 cri.go:89] found id: ""
	I0505 22:29:23.933845   66092 logs.go:276] 0 containers: []
	W0505 22:29:23.933852   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:23.933858   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:23.933912   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:23.985747   66092 cri.go:89] found id: ""
	I0505 22:29:23.985774   66092 logs.go:276] 0 containers: []
	W0505 22:29:23.985785   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:23.985793   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:23.985853   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:24.031558   66092 cri.go:89] found id: ""
	I0505 22:29:24.031587   66092 logs.go:276] 0 containers: []
	W0505 22:29:24.031599   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:24.031611   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:24.031626   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:24.089781   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:24.089832   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:24.104841   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:24.104873   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:24.183308   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:24.183342   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:24.183361   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:24.269834   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:24.269870   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:22.382237   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:24.386926   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:25.697967   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:27.698158   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:26.821893   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:26.839706   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:26.839783   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:26.882185   66092 cri.go:89] found id: ""
	I0505 22:29:26.882204   66092 logs.go:276] 0 containers: []
	W0505 22:29:26.882223   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:26.882231   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:26.882294   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:26.927333   66092 cri.go:89] found id: ""
	I0505 22:29:26.927366   66092 logs.go:276] 0 containers: []
	W0505 22:29:26.927375   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:26.927380   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:26.927445   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:26.970261   66092 cri.go:89] found id: ""
	I0505 22:29:26.970298   66092 logs.go:276] 0 containers: []
	W0505 22:29:26.970310   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:26.970317   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:26.970378   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:27.013672   66092 cri.go:89] found id: ""
	I0505 22:29:27.013698   66092 logs.go:276] 0 containers: []
	W0505 22:29:27.013706   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:27.013711   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:27.013780   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:27.056225   66092 cri.go:89] found id: ""
	I0505 22:29:27.056250   66092 logs.go:276] 0 containers: []
	W0505 22:29:27.056259   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:27.056265   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:27.056326   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:27.095342   66092 cri.go:89] found id: ""
	I0505 22:29:27.095363   66092 logs.go:276] 0 containers: []
	W0505 22:29:27.095371   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:27.095378   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:27.095424   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:27.136061   66092 cri.go:89] found id: ""
	I0505 22:29:27.136088   66092 logs.go:276] 0 containers: []
	W0505 22:29:27.136100   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:27.136106   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:27.136164   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:27.175721   66092 cri.go:89] found id: ""
	I0505 22:29:27.175746   66092 logs.go:276] 0 containers: []
	W0505 22:29:27.175753   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:27.175761   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:27.175771   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:27.226603   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:27.226639   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:27.283507   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:27.283543   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:27.299210   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:27.299243   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:27.389596   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:27.389620   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:27.389633   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:29.973949   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:29.989421   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:29.989505   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:30.030042   66092 cri.go:89] found id: ""
	I0505 22:29:30.030068   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.030077   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:30.030084   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:30.030147   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:30.074475   66092 cri.go:89] found id: ""
	I0505 22:29:30.074498   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.074506   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:30.074511   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:30.074557   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:26.882136   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:28.882176   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:30.198875   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:32.699496   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:30.118556   66092 cri.go:89] found id: ""
	I0505 22:29:30.118584   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.118592   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:30.118597   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:30.118649   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:30.157668   66092 cri.go:89] found id: ""
	I0505 22:29:30.157701   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.157712   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:30.157720   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:30.157782   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:30.202916   66092 cri.go:89] found id: ""
	I0505 22:29:30.202936   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.202944   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:30.202948   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:30.203009   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:30.244413   66092 cri.go:89] found id: ""
	I0505 22:29:30.244445   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.244457   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:30.244471   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:30.244560   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:30.285333   66092 cri.go:89] found id: ""
	I0505 22:29:30.285364   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.285373   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:30.285379   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:30.285482   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:30.324945   66092 cri.go:89] found id: ""
	I0505 22:29:30.324970   66092 logs.go:276] 0 containers: []
	W0505 22:29:30.324978   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:30.324986   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:30.324998   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:30.340241   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:30.340272   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:30.420158   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:30.420181   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:30.420196   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:30.502515   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:30.502557   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:30.556890   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:30.556923   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:33.128305   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:33.145042   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:33.145116   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:33.195857   66092 cri.go:89] found id: ""
	I0505 22:29:33.195890   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.195900   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:33.195906   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:33.195971   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:33.239669   66092 cri.go:89] found id: ""
	I0505 22:29:33.239694   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.239710   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:33.239717   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:33.239767   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:33.286649   66092 cri.go:89] found id: ""
	I0505 22:29:33.286673   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.286690   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:33.286694   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:33.286765   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:33.334688   66092 cri.go:89] found id: ""
	I0505 22:29:33.334718   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.334729   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:33.334736   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:33.334806   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:33.373369   66092 cri.go:89] found id: ""
	I0505 22:29:33.373394   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.373401   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:33.373408   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:33.373469   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:33.415711   66092 cri.go:89] found id: ""
	I0505 22:29:33.415734   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.415742   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:33.415748   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:33.415813   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:33.457341   66092 cri.go:89] found id: ""
	I0505 22:29:33.457361   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.457368   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:33.457373   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:33.457417   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:33.494319   66092 cri.go:89] found id: ""
	I0505 22:29:33.494349   66092 logs.go:276] 0 containers: []
	W0505 22:29:33.494357   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:33.494366   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:33.494381   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:33.574465   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:33.574489   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:33.574503   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:33.660091   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:33.660132   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:33.704252   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:33.704276   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:33.757223   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:33.757253   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:30.882582   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:33.386065   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:35.197706   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:37.198169   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:39.198677   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:36.274389   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:36.291928   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:36.291990   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:36.356524   66092 cri.go:89] found id: ""
	I0505 22:29:36.356552   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.356561   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:36.356567   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:36.356651   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:36.406472   66092 cri.go:89] found id: ""
	I0505 22:29:36.406498   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.406507   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:36.406515   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:36.406577   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:36.463152   66092 cri.go:89] found id: ""
	I0505 22:29:36.463190   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.463201   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:36.463209   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:36.463277   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:36.500511   66092 cri.go:89] found id: ""
	I0505 22:29:36.500543   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.500552   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:36.500564   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:36.500621   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:36.537192   66092 cri.go:89] found id: ""
	I0505 22:29:36.537216   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.537225   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:36.537231   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:36.537311   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:36.580945   66092 cri.go:89] found id: ""
	I0505 22:29:36.580973   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.580987   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:36.580995   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:36.581068   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:36.622752   66092 cri.go:89] found id: ""
	I0505 22:29:36.622784   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.622795   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:36.622803   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:36.622865   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:36.662103   66092 cri.go:89] found id: ""
	I0505 22:29:36.662135   66092 logs.go:276] 0 containers: []
	W0505 22:29:36.662146   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:36.662157   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:36.662172   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:36.713983   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:36.714020   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:36.732525   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:36.732557   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:36.820192   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:36.820216   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:36.820233   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:36.906282   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:36.906322   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:39.454201   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:39.469684   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:39.469756   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:39.507602   66092 cri.go:89] found id: ""
	I0505 22:29:39.507627   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.507634   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:39.507639   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:39.507693   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:39.548181   66092 cri.go:89] found id: ""
	I0505 22:29:39.548209   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.548216   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:39.548224   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:39.548283   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:39.588107   66092 cri.go:89] found id: ""
	I0505 22:29:39.588132   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.588140   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:39.588146   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:39.588211   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:39.625810   66092 cri.go:89] found id: ""
	I0505 22:29:39.625839   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.625850   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:39.625857   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:39.625919   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:39.665740   66092 cri.go:89] found id: ""
	I0505 22:29:39.665768   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.665779   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:39.665786   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:39.665843   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:39.709186   66092 cri.go:89] found id: ""
	I0505 22:29:39.709208   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.709216   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:39.709221   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:39.709276   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:39.750178   66092 cri.go:89] found id: ""
	I0505 22:29:39.750205   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.750217   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:39.750224   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:39.750288   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:39.795840   66092 cri.go:89] found id: ""
	I0505 22:29:39.795872   66092 logs.go:276] 0 containers: []
	W0505 22:29:39.795885   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:39.795896   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:39.795909   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:39.882395   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:39.882424   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:39.882441   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:39.964986   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:39.965018   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:40.011163   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:40.011199   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:40.065772   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:40.065808   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:35.882261   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:37.882891   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:39.883194   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:41.696271   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:43.697655   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:42.583158   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:42.597487   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:42.597553   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:42.639606   66092 cri.go:89] found id: ""
	I0505 22:29:42.639632   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.639639   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:42.639645   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:42.639704   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:42.680543   66092 cri.go:89] found id: ""
	I0505 22:29:42.680575   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.680586   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:42.680593   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:42.680654   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:42.719091   66092 cri.go:89] found id: ""
	I0505 22:29:42.719120   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.719128   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:42.719134   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:42.719189   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:42.755058   66092 cri.go:89] found id: ""
	I0505 22:29:42.755080   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.755088   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:42.755093   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:42.755139   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:42.795044   66092 cri.go:89] found id: ""
	I0505 22:29:42.795068   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.795078   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:42.795085   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:42.795149   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:42.844458   66092 cri.go:89] found id: ""
	I0505 22:29:42.844492   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.844500   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:42.844505   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:42.844562   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:42.894097   66092 cri.go:89] found id: ""
	I0505 22:29:42.894129   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.894140   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:42.894148   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:42.894205   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:42.937593   66092 cri.go:89] found id: ""
	I0505 22:29:42.937626   66092 logs.go:276] 0 containers: []
	W0505 22:29:42.937634   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:42.937642   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:42.937654   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:42.999885   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:42.999922   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:43.018422   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:43.018463   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:43.101120   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:43.101141   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:43.101153   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:43.175995   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:43.176031   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
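	The block above is one pass of minikube's log-collection loop: pgrep finds no kube-apiserver process, every per-component crictl query returns an empty ID list, and logs.go falls back to gathering kubelet, dmesg, "describe nodes", CRI-O and container-status output. A minimal shell sketch of the equivalent manual checks on the node (run via "minikube ssh" or directly on the VM; it only reuses the commands already shown in this log and assumes crictl and journalctl are present there):
	  # re-run the same per-component queries that came back empty above
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	  done
	  # the same fallback sources minikube collects when nothing is found
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo crictl ps -a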
	I0505 22:29:41.883419   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:44.382199   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:45.698036   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:48.199569   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
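	The interleaved pod_ready lines belong to two other profiles running in parallel (PIDs 65347 and 66662); both are polling a metrics-server pod that never reports Ready during this window. The same condition can be checked directly with kubectl, for example (illustrative only; the pod name is taken from the log, the deployment name is inferred from it):
	  kubectl -n kube-system get pod metrics-server-569cc877fc-qwd2z \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	  kubectl -n kube-system get deploy metrics-server -w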
	I0505 22:29:45.730544   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:45.746582   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:45.746659   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:45.792692   66092 cri.go:89] found id: ""
	I0505 22:29:45.792722   66092 logs.go:276] 0 containers: []
	W0505 22:29:45.792730   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:45.792735   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:45.792799   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:45.835104   66092 cri.go:89] found id: ""
	I0505 22:29:45.835136   66092 logs.go:276] 0 containers: []
	W0505 22:29:45.835148   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:45.835154   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:45.835213   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:45.875438   66092 cri.go:89] found id: ""
	I0505 22:29:45.875468   66092 logs.go:276] 0 containers: []
	W0505 22:29:45.875498   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:45.875509   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:45.875572   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:45.912940   66092 cri.go:89] found id: ""
	I0505 22:29:45.912964   66092 logs.go:276] 0 containers: []
	W0505 22:29:45.912972   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:45.912977   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:45.913026   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:45.951385   66092 cri.go:89] found id: ""
	I0505 22:29:45.951412   66092 logs.go:276] 0 containers: []
	W0505 22:29:45.951422   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:45.951435   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:45.951508   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:45.993243   66092 cri.go:89] found id: ""
	I0505 22:29:45.993272   66092 logs.go:276] 0 containers: []
	W0505 22:29:45.993281   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:45.993288   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:45.993355   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:46.033731   66092 cri.go:89] found id: ""
	I0505 22:29:46.033759   66092 logs.go:276] 0 containers: []
	W0505 22:29:46.033770   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:46.033777   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:46.033836   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:46.070772   66092 cri.go:89] found id: ""
	I0505 22:29:46.070801   66092 logs.go:276] 0 containers: []
	W0505 22:29:46.070810   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:46.070819   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:46.070835   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:46.156816   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:46.156849   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:46.202195   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:46.202220   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:46.254382   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:46.254418   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:46.270604   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:46.270634   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:46.351721   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
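	Every "describe nodes" attempt fails the same way: the bundled kubectl cannot reach localhost:8443, which matches the empty kube-apiserver listings above: nothing is serving the API on this node. A quick confirmation from the node (a sketch; ss is assumed to be available in the guest, while the kubectl path and kubeconfig are the ones from the log):
	  sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig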
	I0505 22:29:48.852457   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:48.867451   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:48.867557   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:48.907558   66092 cri.go:89] found id: ""
	I0505 22:29:48.907590   66092 logs.go:276] 0 containers: []
	W0505 22:29:48.907601   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:48.907608   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:48.907680   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:48.954025   66092 cri.go:89] found id: ""
	I0505 22:29:48.954063   66092 logs.go:276] 0 containers: []
	W0505 22:29:48.954075   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:48.954082   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:48.954142   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:48.993508   66092 cri.go:89] found id: ""
	I0505 22:29:48.993547   66092 logs.go:276] 0 containers: []
	W0505 22:29:48.993558   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:48.993565   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:48.993628   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:49.034563   66092 cri.go:89] found id: ""
	I0505 22:29:49.034589   66092 logs.go:276] 0 containers: []
	W0505 22:29:49.034600   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:49.034607   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:49.034676   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:49.085645   66092 cri.go:89] found id: ""
	I0505 22:29:49.085667   66092 logs.go:276] 0 containers: []
	W0505 22:29:49.085674   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:49.085680   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:49.085728   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:49.131583   66092 cri.go:89] found id: ""
	I0505 22:29:49.131613   66092 logs.go:276] 0 containers: []
	W0505 22:29:49.131622   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:49.131627   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:49.131695   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:49.171564   66092 cri.go:89] found id: ""
	I0505 22:29:49.171594   66092 logs.go:276] 0 containers: []
	W0505 22:29:49.171605   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:49.171613   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:49.171677   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:49.213105   66092 cri.go:89] found id: ""
	I0505 22:29:49.213126   66092 logs.go:276] 0 containers: []
	W0505 22:29:49.213134   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:49.213141   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:49.213154   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:49.290306   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:49.290340   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:49.335186   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:49.335225   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:49.386790   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:49.386818   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:49.403200   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:49.403228   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:49.481654   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:46.883788   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:49.382738   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:50.696759   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:52.698306   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:51.982713   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:51.997034   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:51.997106   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:52.033609   66092 cri.go:89] found id: ""
	I0505 22:29:52.033637   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.033648   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:52.033655   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:52.033706   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:52.073167   66092 cri.go:89] found id: ""
	I0505 22:29:52.073192   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.073202   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:52.073210   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:52.073269   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:52.114307   66092 cri.go:89] found id: ""
	I0505 22:29:52.114332   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.114342   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:52.114347   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:52.114394   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:52.160673   66092 cri.go:89] found id: ""
	I0505 22:29:52.160694   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.160701   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:52.160706   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:52.160753   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:52.203328   66092 cri.go:89] found id: ""
	I0505 22:29:52.203350   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.203357   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:52.203363   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:52.203406   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:52.250668   66092 cri.go:89] found id: ""
	I0505 22:29:52.250698   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.250708   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:52.250715   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:52.250778   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:52.299078   66092 cri.go:89] found id: ""
	I0505 22:29:52.299103   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.299110   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:52.299115   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:52.299174   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:52.343514   66092 cri.go:89] found id: ""
	I0505 22:29:52.343549   66092 logs.go:276] 0 containers: []
	W0505 22:29:52.343560   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:52.343571   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:52.343586   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:52.418233   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:52.418268   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:52.465042   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:52.465074   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:52.516167   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:52.516201   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:52.533759   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:52.533788   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:52.609733   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:51.386946   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:53.884286   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:55.197638   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:57.697022   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:59.698038   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:55.110609   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:55.124637   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:55.124705   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:55.161307   66092 cri.go:89] found id: ""
	I0505 22:29:55.161328   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.161335   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:55.161344   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:55.161392   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:55.197339   66092 cri.go:89] found id: ""
	I0505 22:29:55.197367   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.197377   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:55.197385   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:55.197430   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:55.241651   66092 cri.go:89] found id: ""
	I0505 22:29:55.241682   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.241693   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:55.241701   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:55.241760   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:55.279583   66092 cri.go:89] found id: ""
	I0505 22:29:55.279612   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.279620   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:55.279628   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:55.279690   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:55.318368   66092 cri.go:89] found id: ""
	I0505 22:29:55.318399   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.318410   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:55.318417   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:55.318478   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:55.356095   66092 cri.go:89] found id: ""
	I0505 22:29:55.356120   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.356130   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:55.356138   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:55.356201   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:55.401684   66092 cri.go:89] found id: ""
	I0505 22:29:55.401708   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.401718   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:55.401725   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:55.401783   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:55.442679   66092 cri.go:89] found id: ""
	I0505 22:29:55.442707   66092 logs.go:276] 0 containers: []
	W0505 22:29:55.442714   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:55.442722   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:55.442734   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:55.495733   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:55.495761   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:55.510991   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:55.511016   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:55.596136   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:55.596158   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:55.596179   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:55.674512   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:55.674546   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:58.219788   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:29:58.234114   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:29:58.234183   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:29:58.276683   66092 cri.go:89] found id: ""
	I0505 22:29:58.276709   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.276716   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:29:58.276721   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:29:58.276772   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:29:58.320058   66092 cri.go:89] found id: ""
	I0505 22:29:58.320087   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.320099   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:29:58.320106   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:29:58.320175   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:29:58.360703   66092 cri.go:89] found id: ""
	I0505 22:29:58.360727   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.360735   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:29:58.360740   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:29:58.360796   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:29:58.399190   66092 cri.go:89] found id: ""
	I0505 22:29:58.399213   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.399221   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:29:58.399226   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:29:58.399285   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:29:58.436007   66092 cri.go:89] found id: ""
	I0505 22:29:58.436036   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.436046   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:29:58.436051   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:29:58.436117   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:29:58.472132   66092 cri.go:89] found id: ""
	I0505 22:29:58.472165   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.472178   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:29:58.472186   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:29:58.472252   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:29:58.513944   66092 cri.go:89] found id: ""
	I0505 22:29:58.513978   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.513989   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:29:58.513997   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:29:58.514064   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:29:58.552838   66092 cri.go:89] found id: ""
	I0505 22:29:58.552863   66092 logs.go:276] 0 containers: []
	W0505 22:29:58.552870   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:29:58.552878   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:29:58.552890   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:29:58.604955   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:29:58.604985   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:29:58.620841   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:29:58.620886   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:29:58.696441   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:29:58.696462   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:29:58.696476   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:29:58.773154   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:29:58.773190   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:29:56.382715   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:29:58.383331   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:02.198338   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:04.696555   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:01.318413   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:01.334479   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:01.334548   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:01.378328   66092 cri.go:89] found id: ""
	I0505 22:30:01.378410   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.378432   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:01.378445   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:01.378511   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:01.417337   66092 cri.go:89] found id: ""
	I0505 22:30:01.417372   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.417381   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:01.417388   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:01.417447   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:01.458192   66092 cri.go:89] found id: ""
	I0505 22:30:01.458222   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.458234   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:01.458241   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:01.458340   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:01.497284   66092 cri.go:89] found id: ""
	I0505 22:30:01.497314   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.497324   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:01.497331   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:01.497392   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:01.534679   66092 cri.go:89] found id: ""
	I0505 22:30:01.534703   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.534714   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:01.534722   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:01.534776   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:01.573819   66092 cri.go:89] found id: ""
	I0505 22:30:01.573848   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.573858   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:01.573865   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:01.573925   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:01.617254   66092 cri.go:89] found id: ""
	I0505 22:30:01.617282   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.617292   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:01.617299   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:01.617347   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:01.662449   66092 cri.go:89] found id: ""
	I0505 22:30:01.662474   66092 logs.go:276] 0 containers: []
	W0505 22:30:01.662482   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:01.662490   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:01.662500   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:01.746342   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:01.746368   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:01.746382   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:01.828702   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:01.828746   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:01.876845   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:01.876881   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:01.932272   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:01.932332   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:04.448638   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:04.463264   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:04.463341   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:04.507436   66092 cri.go:89] found id: ""
	I0505 22:30:04.507468   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.507494   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:04.507501   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:04.507567   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:04.548296   66092 cri.go:89] found id: ""
	I0505 22:30:04.548328   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.548341   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:04.548348   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:04.548402   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:04.592030   66092 cri.go:89] found id: ""
	I0505 22:30:04.592058   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.592067   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:04.592074   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:04.592141   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:04.629714   66092 cri.go:89] found id: ""
	I0505 22:30:04.629742   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.629753   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:04.629761   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:04.629820   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:04.667943   66092 cri.go:89] found id: ""
	I0505 22:30:04.667974   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.667983   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:04.667991   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:04.668057   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:04.710846   66092 cri.go:89] found id: ""
	I0505 22:30:04.710873   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.710883   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:04.710890   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:04.710951   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:04.749285   66092 cri.go:89] found id: ""
	I0505 22:30:04.749314   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.749321   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:04.749329   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:04.749393   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:04.791229   66092 cri.go:89] found id: ""
	I0505 22:30:04.791252   66092 logs.go:276] 0 containers: []
	W0505 22:30:04.791260   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:04.791268   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:04.791284   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:04.843047   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:04.843083   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:04.859560   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:04.859590   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:04.960399   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:04.960427   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:04.960444   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:05.046225   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:05.046264   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:00.881952   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:03.384176   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:06.697628   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:09.197806   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:07.594295   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:07.609753   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:07.609821   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:07.655252   66092 cri.go:89] found id: ""
	I0505 22:30:07.655274   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.655282   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:07.655288   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:07.655338   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:07.697224   66092 cri.go:89] found id: ""
	I0505 22:30:07.697254   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.697264   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:07.697272   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:07.697333   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:07.737698   66092 cri.go:89] found id: ""
	I0505 22:30:07.737731   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.737741   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:07.737747   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:07.737816   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:07.778900   66092 cri.go:89] found id: ""
	I0505 22:30:07.778931   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.778941   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:07.778948   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:07.779009   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:07.820079   66092 cri.go:89] found id: ""
	I0505 22:30:07.820113   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.820123   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:07.820131   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:07.820200   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:07.857591   66092 cri.go:89] found id: ""
	I0505 22:30:07.857617   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.857635   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:07.857644   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:07.857711   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:07.903688   66092 cri.go:89] found id: ""
	I0505 22:30:07.903723   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.903734   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:07.903748   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:07.903810   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:07.946418   66092 cri.go:89] found id: ""
	I0505 22:30:07.946445   66092 logs.go:276] 0 containers: []
	W0505 22:30:07.946457   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:07.946467   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:07.946484   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:08.024307   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:08.024333   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:08.024346   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:08.100959   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:08.100992   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:08.150591   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:08.150617   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:08.204101   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:08.204134   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:05.882527   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:07.882703   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:09.883332   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:11.199021   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:13.199556   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:10.721306   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:10.737090   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:10.737172   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:10.777697   66092 cri.go:89] found id: ""
	I0505 22:30:10.777720   66092 logs.go:276] 0 containers: []
	W0505 22:30:10.777727   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:10.777732   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:10.777789   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:10.821324   66092 cri.go:89] found id: ""
	I0505 22:30:10.821350   66092 logs.go:276] 0 containers: []
	W0505 22:30:10.821357   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:10.821368   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:10.821429   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:10.861069   66092 cri.go:89] found id: ""
	I0505 22:30:10.861096   66092 logs.go:276] 0 containers: []
	W0505 22:30:10.861105   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:10.861110   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:10.861156   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:10.913611   66092 cri.go:89] found id: ""
	I0505 22:30:10.913639   66092 logs.go:276] 0 containers: []
	W0505 22:30:10.913647   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:10.913653   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:10.913717   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:10.966451   66092 cri.go:89] found id: ""
	I0505 22:30:10.966495   66092 logs.go:276] 0 containers: []
	W0505 22:30:10.966507   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:10.966514   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:10.966572   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:11.014182   66092 cri.go:89] found id: ""
	I0505 22:30:11.014210   66092 logs.go:276] 0 containers: []
	W0505 22:30:11.014224   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:11.014232   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:11.014293   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:11.052995   66092 cri.go:89] found id: ""
	I0505 22:30:11.053020   66092 logs.go:276] 0 containers: []
	W0505 22:30:11.053027   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:11.053032   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:11.053077   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:11.090833   66092 cri.go:89] found id: ""
	I0505 22:30:11.090859   66092 logs.go:276] 0 containers: []
	W0505 22:30:11.090870   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:11.090883   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:11.090898   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:11.166075   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:11.166100   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:11.166114   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:11.250288   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:11.250326   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:11.304786   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:11.304822   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:11.361383   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:11.361431   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:13.879829   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:13.895931   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:13.896014   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:13.945173   66092 cri.go:89] found id: ""
	I0505 22:30:13.945199   66092 logs.go:276] 0 containers: []
	W0505 22:30:13.945210   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:13.945217   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:13.945281   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:13.985923   66092 cri.go:89] found id: ""
	I0505 22:30:13.985958   66092 logs.go:276] 0 containers: []
	W0505 22:30:13.985971   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:13.985978   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:13.986039   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:14.034147   66092 cri.go:89] found id: ""
	I0505 22:30:14.034176   66092 logs.go:276] 0 containers: []
	W0505 22:30:14.034186   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:14.034191   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:14.034247   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:14.074569   66092 cri.go:89] found id: ""
	I0505 22:30:14.074598   66092 logs.go:276] 0 containers: []
	W0505 22:30:14.074605   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:14.074611   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:14.074663   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:14.116630   66092 cri.go:89] found id: ""
	I0505 22:30:14.116660   66092 logs.go:276] 0 containers: []
	W0505 22:30:14.116672   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:14.116680   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:14.116741   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:14.158234   66092 cri.go:89] found id: ""
	I0505 22:30:14.158262   66092 logs.go:276] 0 containers: []
	W0505 22:30:14.158272   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:14.158288   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:14.158344   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:14.201727   66092 cri.go:89] found id: ""
	I0505 22:30:14.201752   66092 logs.go:276] 0 containers: []
	W0505 22:30:14.201762   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:14.201768   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:14.201816   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:14.248265   66092 cri.go:89] found id: ""
	I0505 22:30:14.248290   66092 logs.go:276] 0 containers: []
	W0505 22:30:14.248306   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:14.248316   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:14.248332   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:14.329334   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:14.329377   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:14.375086   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:14.375115   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:14.428126   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:14.428164   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:14.447307   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:14.447341   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:14.523075   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
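The block above is the diagnostic loop minikube falls into once the apiserver is unreachable: logs.go probes each expected control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) through crictl, finds nothing, and then gathers kubelet, dmesg, CRI-O and container-status output instead. A minimal sketch of the same probes run by hand inside the node (for example over minikube ssh); every command here is taken from the log lines above:

    # probe each expected component; empty output means no container was found
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done
    # the fallback log sources gathered above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400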
	I0505 22:30:11.886519   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:14.382984   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:15.699324   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:17.699363   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:17.023347   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:17.041694   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:17.041768   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:17.090772   66092 cri.go:89] found id: ""
	I0505 22:30:17.090799   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.090807   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:17.090813   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:17.090872   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:17.134509   66092 cri.go:89] found id: ""
	I0505 22:30:17.134534   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.134542   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:17.134547   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:17.134605   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:17.178581   66092 cri.go:89] found id: ""
	I0505 22:30:17.178610   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.178622   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:17.178628   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:17.178691   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:17.229459   66092 cri.go:89] found id: ""
	I0505 22:30:17.229484   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.229491   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:17.229497   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:17.229556   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:17.274504   66092 cri.go:89] found id: ""
	I0505 22:30:17.274545   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.274556   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:17.274563   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:17.274618   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:17.312557   66092 cri.go:89] found id: ""
	I0505 22:30:17.312587   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.312597   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:17.312604   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:17.312664   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:17.357379   66092 cri.go:89] found id: ""
	I0505 22:30:17.357406   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.357417   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:17.357425   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:17.357487   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:17.400790   66092 cri.go:89] found id: ""
	I0505 22:30:17.400813   66092 logs.go:276] 0 containers: []
	W0505 22:30:17.400822   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:17.400832   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:17.400850   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:17.454582   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:17.454620   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:17.471566   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:17.471598   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:17.554830   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:17.554854   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:17.554869   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:17.648514   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:17.648560   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:16.881397   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:18.881827   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:20.197511   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:22.704066   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:20.218735   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:20.233363   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:20.233428   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:20.278538   66092 cri.go:89] found id: ""
	I0505 22:30:20.278575   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.278588   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:20.278597   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:20.278657   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:20.318554   66092 cri.go:89] found id: ""
	I0505 22:30:20.318578   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.318586   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:20.318591   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:20.318640   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:20.361977   66092 cri.go:89] found id: ""
	I0505 22:30:20.362011   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.362024   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:20.362033   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:20.362099   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:20.402301   66092 cri.go:89] found id: ""
	I0505 22:30:20.402328   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.402335   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:20.402341   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:20.402391   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:20.447067   66092 cri.go:89] found id: ""
	I0505 22:30:20.447104   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.447118   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:20.447127   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:20.447202   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:20.491459   66092 cri.go:89] found id: ""
	I0505 22:30:20.491503   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.491515   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:20.491523   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:20.491592   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:20.532986   66092 cri.go:89] found id: ""
	I0505 22:30:20.533009   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.533016   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:20.533022   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:20.533096   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:20.573492   66092 cri.go:89] found id: ""
	I0505 22:30:20.573523   66092 logs.go:276] 0 containers: []
	W0505 22:30:20.573531   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:20.573539   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:20.573551   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:20.631996   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:20.632036   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:20.650959   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:20.650989   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:20.733696   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:20.733719   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:20.733733   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:20.818308   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:20.818350   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
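The "container status" step uses a fallback chain: prefer whatever crictl resolves on the PATH, and if that fails fall back to the Docker CLI. Roughly the same thing written out by hand (a sketch, not minikube's exact code path):

    # prefer crictl if installed; otherwise try the Docker CLI
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a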
	I0505 22:30:23.365410   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:23.389536   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:23.389589   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:23.461709   66092 cri.go:89] found id: ""
	I0505 22:30:23.461740   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.461749   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:23.461755   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:23.461817   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:23.530422   66092 cri.go:89] found id: ""
	I0505 22:30:23.530445   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.530452   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:23.530458   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:23.530513   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:23.577346   66092 cri.go:89] found id: ""
	I0505 22:30:23.577375   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.577385   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:23.577391   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:23.577464   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:23.621343   66092 cri.go:89] found id: ""
	I0505 22:30:23.621373   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.621385   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:23.621392   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:23.621458   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:23.660346   66092 cri.go:89] found id: ""
	I0505 22:30:23.660376   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.660387   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:23.660393   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:23.660443   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:23.698191   66092 cri.go:89] found id: ""
	I0505 22:30:23.698218   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.698228   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:23.698235   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:23.698293   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:23.740795   66092 cri.go:89] found id: ""
	I0505 22:30:23.740819   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.740826   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:23.740831   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:23.740884   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:23.782552   66092 cri.go:89] found id: ""
	I0505 22:30:23.782580   66092 logs.go:276] 0 containers: []
	W0505 22:30:23.782591   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:23.782602   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:23.782617   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:23.832898   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:23.832940   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:23.848629   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:23.848663   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:23.940243   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:23.940265   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:23.940279   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:24.022042   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:24.022088   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:21.382760   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:23.384654   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:25.196713   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:27.697233   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:29.698094   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:26.569980   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:26.585936   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:26.586002   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:26.625540   66092 cri.go:89] found id: ""
	I0505 22:30:26.625577   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.625586   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:26.625591   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:26.625643   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:26.668678   66092 cri.go:89] found id: ""
	I0505 22:30:26.668708   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.668720   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:26.668727   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:26.668794   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:26.711511   66092 cri.go:89] found id: ""
	I0505 22:30:26.711536   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.711544   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:26.711550   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:26.711607   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:26.753518   66092 cri.go:89] found id: ""
	I0505 22:30:26.753544   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.753552   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:26.753557   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:26.753607   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:26.798349   66092 cri.go:89] found id: ""
	I0505 22:30:26.798375   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.798381   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:26.798387   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:26.798454   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:26.840599   66092 cri.go:89] found id: ""
	I0505 22:30:26.840630   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.840642   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:26.840650   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:26.840703   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:26.884316   66092 cri.go:89] found id: ""
	I0505 22:30:26.884340   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.884350   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:26.884357   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:26.884417   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:26.927116   66092 cri.go:89] found id: ""
	I0505 22:30:26.927139   66092 logs.go:276] 0 containers: []
	W0505 22:30:26.927147   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:26.927155   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:26.927167   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:27.004962   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:27.004998   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:27.052353   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:27.052387   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:27.104759   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:27.104791   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:27.120386   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:27.120416   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:27.194739   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
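Every "describe nodes" attempt in this run fails the same way: kubectl is pointed at the in-VM endpoint localhost:8443, but the probes above show no kube-apiserver container exists, so the connection is refused. A quick manual check along the same lines; the health-endpoint curl is an assumption, while the other two commands are taken verbatim from the log:

    # is any apiserver process running? (same pgrep minikube uses above)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # if one were running, the endpoint kubectl targets should answer (-k skips cert checks)
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
    # the exact command the log keeps retrying
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig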
	I0505 22:30:29.695809   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:29.711265   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:29.711325   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:29.750863   66092 cri.go:89] found id: ""
	I0505 22:30:29.750894   66092 logs.go:276] 0 containers: []
	W0505 22:30:29.750902   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:29.750908   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:29.750976   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:29.795751   66092 cri.go:89] found id: ""
	I0505 22:30:29.795781   66092 logs.go:276] 0 containers: []
	W0505 22:30:29.795791   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:29.795798   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:29.795863   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:29.839601   66092 cri.go:89] found id: ""
	I0505 22:30:29.839631   66092 logs.go:276] 0 containers: []
	W0505 22:30:29.839639   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:29.839644   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:29.839691   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:29.880855   66092 cri.go:89] found id: ""
	I0505 22:30:29.880881   66092 logs.go:276] 0 containers: []
	W0505 22:30:29.880890   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:29.880896   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:29.880947   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:29.926448   66092 cri.go:89] found id: ""
	I0505 22:30:29.926473   66092 logs.go:276] 0 containers: []
	W0505 22:30:29.926484   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:29.926497   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:29.926573   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:29.966832   66092 cri.go:89] found id: ""
	I0505 22:30:29.966872   66092 logs.go:276] 0 containers: []
	W0505 22:30:29.966883   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:29.966891   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:29.966952   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:30.013350   66092 cri.go:89] found id: ""
	I0505 22:30:30.013374   66092 logs.go:276] 0 containers: []
	W0505 22:30:30.013382   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:30.013387   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:30.013446   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:30.055685   66092 cri.go:89] found id: ""
	I0505 22:30:30.055716   66092 logs.go:276] 0 containers: []
	W0505 22:30:30.055728   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:30.055740   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:30.055757   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:25.881906   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:27.882405   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:30.381731   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:32.198356   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:34.705483   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
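Interleaved with the log gathering, two other test profiles (pids 65347 and 66662) keep polling the Ready condition of their metrics-server pods, which stays False for the whole window shown here. A hedged way to inspect the same condition by hand, using a pod name from the log and assuming the matching kubectl context is selected; the jsonpath filter is an assumption, not something minikube runs:

    # print the Ready condition of the metrics-server pod seen in the log
    kubectl -n kube-system get pod metrics-server-569cc877fc-qwd2z \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or watch it until it changes
    kubectl -n kube-system get pod metrics-server-569cc877fc-qwd2z -w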
	I0505 22:30:30.111197   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:30.111231   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:30.127433   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:30.127461   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:30.206792   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:30.206813   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:30.206825   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:30.292884   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:30.292917   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:32.834011   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:32.849116   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:32.849189   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:32.890106   66092 cri.go:89] found id: ""
	I0505 22:30:32.890132   66092 logs.go:276] 0 containers: []
	W0505 22:30:32.890144   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:32.890151   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:32.890217   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:32.931797   66092 cri.go:89] found id: ""
	I0505 22:30:32.931818   66092 logs.go:276] 0 containers: []
	W0505 22:30:32.931826   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:32.931830   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:32.931882   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:32.971283   66092 cri.go:89] found id: ""
	I0505 22:30:32.971310   66092 logs.go:276] 0 containers: []
	W0505 22:30:32.971320   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:32.971326   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:32.971387   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:33.010369   66092 cri.go:89] found id: ""
	I0505 22:30:33.010397   66092 logs.go:276] 0 containers: []
	W0505 22:30:33.010407   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:33.010413   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:33.010465   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:33.048158   66092 cri.go:89] found id: ""
	I0505 22:30:33.048187   66092 logs.go:276] 0 containers: []
	W0505 22:30:33.048197   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:33.048204   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:33.048267   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:33.091862   66092 cri.go:89] found id: ""
	I0505 22:30:33.091902   66092 logs.go:276] 0 containers: []
	W0505 22:30:33.091915   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:33.091924   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:33.091988   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:33.129212   66092 cri.go:89] found id: ""
	I0505 22:30:33.129244   66092 logs.go:276] 0 containers: []
	W0505 22:30:33.129255   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:33.129262   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:33.129327   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:33.172082   66092 cri.go:89] found id: ""
	I0505 22:30:33.172112   66092 logs.go:276] 0 containers: []
	W0505 22:30:33.172124   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:33.172136   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:33.172153   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:33.254664   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:33.254704   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:33.303536   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:33.303569   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:33.356784   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:33.356814   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:33.371448   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:33.371499   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:33.455981   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:32.384861   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:34.881198   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:37.197677   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:39.197956   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:35.956445   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:35.971553   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:35.971629   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:36.008042   66092 cri.go:89] found id: ""
	I0505 22:30:36.008072   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.008082   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:36.008089   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:36.008156   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:36.055751   66092 cri.go:89] found id: ""
	I0505 22:30:36.055779   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.055787   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:36.055793   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:36.055863   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:36.095712   66092 cri.go:89] found id: ""
	I0505 22:30:36.095741   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.095751   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:36.095759   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:36.095820   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:36.135144   66092 cri.go:89] found id: ""
	I0505 22:30:36.135172   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.135180   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:36.135193   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:36.135251   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:36.174909   66092 cri.go:89] found id: ""
	I0505 22:30:36.174937   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.174944   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:36.174954   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:36.175009   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:36.212572   66092 cri.go:89] found id: ""
	I0505 22:30:36.212598   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.212609   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:36.212616   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:36.212673   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:36.248651   66092 cri.go:89] found id: ""
	I0505 22:30:36.248687   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.248696   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:36.248702   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:36.248758   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:36.290282   66092 cri.go:89] found id: ""
	I0505 22:30:36.290305   66092 logs.go:276] 0 containers: []
	W0505 22:30:36.290313   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:36.290321   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:36.290331   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:36.341060   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:36.341104   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:36.357084   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:36.357119   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:36.440091   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:36.440114   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:36.440130   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:36.516736   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:36.516770   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:39.063165   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:39.077902   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:39.077978   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:39.116426   66092 cri.go:89] found id: ""
	I0505 22:30:39.116452   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.116463   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:39.116470   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:39.116528   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:39.159032   66092 cri.go:89] found id: ""
	I0505 22:30:39.159064   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.159078   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:39.159086   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:39.159147   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:39.205801   66092 cri.go:89] found id: ""
	I0505 22:30:39.205830   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.205843   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:39.205851   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:39.205909   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:39.244056   66092 cri.go:89] found id: ""
	I0505 22:30:39.244089   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.244101   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:39.244108   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:39.244173   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:39.284337   66092 cri.go:89] found id: ""
	I0505 22:30:39.284368   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.284379   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:39.284386   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:39.284455   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:39.324181   66092 cri.go:89] found id: ""
	I0505 22:30:39.324207   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.324214   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:39.324220   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:39.324298   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:39.362758   66092 cri.go:89] found id: ""
	I0505 22:30:39.362783   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.362791   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:39.362796   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:39.362845   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:39.403809   66092 cri.go:89] found id: ""
	I0505 22:30:39.403837   66092 logs.go:276] 0 containers: []
	W0505 22:30:39.403844   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:39.403853   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:39.403863   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:39.483945   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:39.483978   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:39.531720   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:39.531758   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:39.587935   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:39.587964   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:39.602471   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:39.602502   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:39.676836   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:36.881862   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:38.882939   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:41.697289   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:44.197573   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:42.177041   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:42.192323   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:42.192389   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:42.235084   66092 cri.go:89] found id: ""
	I0505 22:30:42.235122   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.235134   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:42.235142   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:42.235206   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:42.276328   66092 cri.go:89] found id: ""
	I0505 22:30:42.276356   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.276368   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:42.276382   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:42.276448   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:42.318890   66092 cri.go:89] found id: ""
	I0505 22:30:42.318923   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.318934   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:42.318941   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:42.318991   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:42.359769   66092 cri.go:89] found id: ""
	I0505 22:30:42.359800   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.359808   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:42.359813   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:42.359872   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:42.397922   66092 cri.go:89] found id: ""
	I0505 22:30:42.397951   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.397963   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:42.397972   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:42.398040   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:42.440831   66092 cri.go:89] found id: ""
	I0505 22:30:42.440859   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.440877   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:42.440885   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:42.440949   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:42.478422   66092 cri.go:89] found id: ""
	I0505 22:30:42.478446   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.478454   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:42.478459   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:42.478510   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:42.524495   66092 cri.go:89] found id: ""
	I0505 22:30:42.524526   66092 logs.go:276] 0 containers: []
	W0505 22:30:42.524535   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:42.524544   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:42.524554   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:42.579948   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:42.579985   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:42.595968   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:42.595995   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:42.667739   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:42.667756   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:42.667768   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:42.757792   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:42.757839   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:40.883101   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:43.383156   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:46.697873   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:48.698432   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:45.301004   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:45.315390   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:45.315463   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:45.353505   66092 cri.go:89] found id: ""
	I0505 22:30:45.353532   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.353539   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:45.353545   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:45.353598   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:45.393388   66092 cri.go:89] found id: ""
	I0505 22:30:45.393413   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.393421   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:45.393425   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:45.393479   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:45.430198   66092 cri.go:89] found id: ""
	I0505 22:30:45.430222   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.430234   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:45.430242   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:45.430295   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:45.467866   66092 cri.go:89] found id: ""
	I0505 22:30:45.467893   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.467903   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:45.467910   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:45.467972   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:45.508537   66092 cri.go:89] found id: ""
	I0505 22:30:45.508562   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.508570   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:45.508575   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:45.508637   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:45.547795   66092 cri.go:89] found id: ""
	I0505 22:30:45.547828   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.547839   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:45.547847   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:45.547925   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:45.585707   66092 cri.go:89] found id: ""
	I0505 22:30:45.585733   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.585745   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:45.585752   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:45.585806   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:45.624779   66092 cri.go:89] found id: ""
	I0505 22:30:45.624806   66092 logs.go:276] 0 containers: []
	W0505 22:30:45.624815   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:45.624824   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:45.624835   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:45.680560   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:45.680592   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:45.696835   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:45.696870   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:45.770928   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:45.770949   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:45.770967   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:45.856063   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:45.856099   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:48.425730   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:48.441184   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:48.441241   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:48.478986   66092 cri.go:89] found id: ""
	I0505 22:30:48.479010   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.479018   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:48.479023   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:48.479072   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:48.516660   66092 cri.go:89] found id: ""
	I0505 22:30:48.516690   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.516699   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:48.516705   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:48.516764   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:48.556161   66092 cri.go:89] found id: ""
	I0505 22:30:48.556191   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.556202   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:48.556208   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:48.556273   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:48.596676   66092 cri.go:89] found id: ""
	I0505 22:30:48.596704   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.596714   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:48.596723   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:48.596788   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:48.643579   66092 cri.go:89] found id: ""
	I0505 22:30:48.643604   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.643612   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:48.643616   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:48.643670   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:48.682802   66092 cri.go:89] found id: ""
	I0505 22:30:48.682828   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.682835   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:48.682840   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:48.682890   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:48.723696   66092 cri.go:89] found id: ""
	I0505 22:30:48.723726   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.723746   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:48.723752   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:48.723819   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:48.764197   66092 cri.go:89] found id: ""
	I0505 22:30:48.764229   66092 logs.go:276] 0 containers: []
	W0505 22:30:48.764249   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:48.764262   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:48.764278   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:48.819355   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:48.819384   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:48.875467   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:48.875520   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:48.891428   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:48.891460   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:48.972342   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:48.972368   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:48.972387   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:45.883210   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:48.382483   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:50.383388   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:50.698770   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:53.197577   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:51.556390   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:51.571215   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:51.571290   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:51.609550   66092 cri.go:89] found id: ""
	I0505 22:30:51.609581   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.609592   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:51.609599   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:51.609656   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:51.658311   66092 cri.go:89] found id: ""
	I0505 22:30:51.658339   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.658348   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:51.658353   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:51.658416   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:51.718379   66092 cri.go:89] found id: ""
	I0505 22:30:51.718404   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.718412   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:51.718417   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:51.718466   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:51.767770   66092 cri.go:89] found id: ""
	I0505 22:30:51.767795   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.767803   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:51.767808   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:51.767855   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:51.811534   66092 cri.go:89] found id: ""
	I0505 22:30:51.811561   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.811569   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:51.811575   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:51.811640   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:51.856908   66092 cri.go:89] found id: ""
	I0505 22:30:51.856938   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.856948   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:51.856955   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:51.857026   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:51.897596   66092 cri.go:89] found id: ""
	I0505 22:30:51.897627   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.897638   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:51.897648   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:51.897713   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:51.938220   66092 cri.go:89] found id: ""
	I0505 22:30:51.938264   66092 logs.go:276] 0 containers: []
	W0505 22:30:51.938287   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:51.938300   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:51.938321   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:51.981953   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:51.981982   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:52.036641   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:52.036677   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:52.052329   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:52.052360   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:52.138666   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:52.138690   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:52.138701   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:54.718954   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:54.734699   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:54.734758   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:54.776312   66092 cri.go:89] found id: ""
	I0505 22:30:54.776335   66092 logs.go:276] 0 containers: []
	W0505 22:30:54.776344   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:54.776349   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:54.776399   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:54.816360   66092 cri.go:89] found id: ""
	I0505 22:30:54.816396   66092 logs.go:276] 0 containers: []
	W0505 22:30:54.816411   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:54.816419   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:54.816483   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:54.866160   66092 cri.go:89] found id: ""
	I0505 22:30:54.866191   66092 logs.go:276] 0 containers: []
	W0505 22:30:54.866202   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:54.866210   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:54.866272   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:54.909608   66092 cri.go:89] found id: ""
	I0505 22:30:54.909636   66092 logs.go:276] 0 containers: []
	W0505 22:30:54.909646   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:54.909653   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:54.909714   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:54.951353   66092 cri.go:89] found id: ""
	I0505 22:30:54.951384   66092 logs.go:276] 0 containers: []
	W0505 22:30:54.951394   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:54.951401   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:54.951467   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:54.996695   66092 cri.go:89] found id: ""
	I0505 22:30:54.996729   66092 logs.go:276] 0 containers: []
	W0505 22:30:54.996738   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:54.996745   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:54.996809   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:55.035553   66092 cri.go:89] found id: ""
	I0505 22:30:55.035583   66092 logs.go:276] 0 containers: []
	W0505 22:30:55.035592   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:55.035599   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:55.035669   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:55.079234   66092 cri.go:89] found id: ""
	I0505 22:30:55.079260   66092 logs.go:276] 0 containers: []
	W0505 22:30:55.079268   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:55.079276   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:55.079288   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:52.881712   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:54.886866   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:55.198665   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:57.697570   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:55.163960   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:55.164002   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:55.211990   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:55.212023   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:55.270364   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:55.270417   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:55.286423   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:55.286508   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:55.377164   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:57.878184   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:30:57.896809   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:30:57.896891   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:30:57.947267   66092 cri.go:89] found id: ""
	I0505 22:30:57.947295   66092 logs.go:276] 0 containers: []
	W0505 22:30:57.947304   66092 logs.go:278] No container was found matching "kube-apiserver"
	I0505 22:30:57.947309   66092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:30:57.947363   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:30:57.992453   66092 cri.go:89] found id: ""
	I0505 22:30:57.992493   66092 logs.go:276] 0 containers: []
	W0505 22:30:57.992504   66092 logs.go:278] No container was found matching "etcd"
	I0505 22:30:57.992511   66092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:30:57.992579   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:30:58.030952   66092 cri.go:89] found id: ""
	I0505 22:30:58.030986   66092 logs.go:276] 0 containers: []
	W0505 22:30:58.030998   66092 logs.go:278] No container was found matching "coredns"
	I0505 22:30:58.031006   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:30:58.031070   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:30:58.074417   66092 cri.go:89] found id: ""
	I0505 22:30:58.074453   66092 logs.go:276] 0 containers: []
	W0505 22:30:58.074464   66092 logs.go:278] No container was found matching "kube-scheduler"
	I0505 22:30:58.074472   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:30:58.074531   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:30:58.113141   66092 cri.go:89] found id: ""
	I0505 22:30:58.113174   66092 logs.go:276] 0 containers: []
	W0505 22:30:58.113183   66092 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:30:58.113188   66092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:30:58.113246   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:30:58.152567   66092 cri.go:89] found id: ""
	I0505 22:30:58.152595   66092 logs.go:276] 0 containers: []
	W0505 22:30:58.152604   66092 logs.go:278] No container was found matching "kube-controller-manager"
	I0505 22:30:58.152609   66092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:30:58.152665   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:30:58.190600   66092 cri.go:89] found id: ""
	I0505 22:30:58.190620   66092 logs.go:276] 0 containers: []
	W0505 22:30:58.190628   66092 logs.go:278] No container was found matching "kindnet"
	I0505 22:30:58.190634   66092 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0505 22:30:58.190709   66092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0505 22:30:58.234686   66092 cri.go:89] found id: ""
	I0505 22:30:58.234715   66092 logs.go:276] 0 containers: []
	W0505 22:30:58.234726   66092 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0505 22:30:58.234738   66092 logs.go:123] Gathering logs for kubelet ...
	I0505 22:30:58.234756   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:30:58.299144   66092 logs.go:123] Gathering logs for dmesg ...
	I0505 22:30:58.299195   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:30:58.324045   66092 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:30:58.324081   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:30:58.439200   66092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:30:58.439227   66092 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:30:58.439243   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0505 22:30:58.524354   66092 logs.go:123] Gathering logs for container status ...
	I0505 22:30:58.524389   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:30:57.383941   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:30:59.882631   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:01.074099   66092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 22:31:01.091393   66092 kubeadm.go:591] duration metric: took 4m3.576859072s to restartPrimaryControlPlane
	W0505 22:31:01.091466   66092 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0505 22:31:01.091515   66092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0505 22:31:03.466204   66092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.374667015s)
	I0505 22:31:03.466276   66092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 22:31:03.484420   66092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 22:31:03.496964   66092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 22:31:03.510272   66092 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 22:31:03.510300   66092 kubeadm.go:156] found existing configuration files:
	
	I0505 22:31:03.510369   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 22:31:03.521901   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 22:31:03.521965   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 22:31:03.534046   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 22:31:03.546358   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 22:31:03.546414   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 22:31:03.558203   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 22:31:03.570740   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 22:31:03.570800   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 22:31:03.582857   66092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 22:31:03.594448   66092 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 22:31:03.594524   66092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 22:31:03.606496   66092 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 22:31:03.686167   66092 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0505 22:31:03.686289   66092 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 22:31:03.864906   66092 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 22:31:03.865156   66092 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 22:31:03.865292   66092 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0505 22:31:04.093756   66092 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 22:31:04.096405   66092 out.go:204]   - Generating certificates and keys ...
	I0505 22:31:04.096504   66092 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 22:31:04.096596   66092 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 22:31:04.096725   66092 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0505 22:31:04.096808   66092 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0505 22:31:04.096928   66092 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0505 22:31:04.097007   66092 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0505 22:31:04.097099   66092 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0505 22:31:04.097209   66092 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0505 22:31:04.097468   66092 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0505 22:31:04.097940   66092 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0505 22:31:04.098026   66092 kubeadm.go:309] [certs] Using the existing "sa" key
	I0505 22:31:04.098130   66092 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 22:31:04.218875   66092 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 22:31:04.373410   66092 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 22:31:04.475519   66092 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 22:31:04.646490   66092 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 22:31:04.668070   66092 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 22:31:04.669366   66092 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 22:31:04.669411   66092 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 22:31:04.838561   66092 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 22:31:00.198605   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:02.696717   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:04.698883   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:04.840498   66092 out.go:204]   - Booting up control plane ...
	I0505 22:31:04.840634   66092 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 22:31:04.849228   66092 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 22:31:04.850307   66092 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 22:31:04.851172   66092 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 22:31:04.853579   66092 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0505 22:31:02.382957   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:04.384332   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:06.699249   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:08.699694   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:06.883286   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:09.382578   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:11.197226   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:13.199243   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:11.383669   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:13.882336   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:15.696216   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:17.696816   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:19.697700   66662 pod_ready.go:102] pod "metrics-server-569cc877fc-qwd2z" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:16.382808   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:18.884605   65347 pod_ready.go:102] pod "metrics-server-569cc877fc-hhggh" in "kube-system" namespace has status "Ready":"False"
	I0505 22:31:19.971226   61991 kubeadm.go:309] [api-check] The API server is not healthy after 4m0.000427361s
	I0505 22:31:19.971270   61991 kubeadm.go:309] 
	I0505 22:31:19.971352   61991 kubeadm.go:309] Unfortunately, an error has occurred:
	I0505 22:31:19.971407   61991 kubeadm.go:309] 	context deadline exceeded
	I0505 22:31:19.971415   61991 kubeadm.go:309] 
	I0505 22:31:19.971443   61991 kubeadm.go:309] This error is likely caused by:
	I0505 22:31:19.971474   61991 kubeadm.go:309] 	- The kubelet is not running
	I0505 22:31:19.971611   61991 kubeadm.go:309] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0505 22:31:19.971631   61991 kubeadm.go:309] 
	I0505 22:31:19.971781   61991 kubeadm.go:309] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0505 22:31:19.971855   61991 kubeadm.go:309] 	- 'systemctl status kubelet'
	I0505 22:31:19.971893   61991 kubeadm.go:309] 	- 'journalctl -xeu kubelet'
	I0505 22:31:19.971904   61991 kubeadm.go:309] 
	I0505 22:31:19.972071   61991 kubeadm.go:309] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0505 22:31:19.972195   61991 kubeadm.go:309] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0505 22:31:19.972298   61991 kubeadm.go:309] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0505 22:31:19.972428   61991 kubeadm.go:309] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0505 22:31:19.972490   61991 kubeadm.go:309] 	Once you have found the failing container, you can inspect its logs with:
	I0505 22:31:19.972561   61991 kubeadm.go:309] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0505 22:31:19.974001   61991 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 22:31:19.974123   61991 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0505 22:31:19.974210   61991 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0505 22:31:19.974295   61991 kubeadm.go:393] duration metric: took 12m21.820340827s to StartCluster
	I0505 22:31:19.974339   61991 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0505 22:31:19.974399   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0505 22:31:20.026620   61991 cri.go:89] found id: "864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc"
	I0505 22:31:20.026649   61991 cri.go:89] found id: ""
	I0505 22:31:20.026658   61991 logs.go:276] 1 containers: [864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc]
	I0505 22:31:20.026716   61991 ssh_runner.go:195] Run: which crictl
	I0505 22:31:20.033034   61991 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0505 22:31:20.033123   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0505 22:31:20.082547   61991 cri.go:89] found id: ""
	I0505 22:31:20.082569   61991 logs.go:276] 0 containers: []
	W0505 22:31:20.082576   61991 logs.go:278] No container was found matching "etcd"
	I0505 22:31:20.082580   61991 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0505 22:31:20.082628   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0505 22:31:20.129263   61991 cri.go:89] found id: ""
	I0505 22:31:20.129295   61991 logs.go:276] 0 containers: []
	W0505 22:31:20.129306   61991 logs.go:278] No container was found matching "coredns"
	I0505 22:31:20.129314   61991 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0505 22:31:20.129376   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0505 22:31:20.167917   61991 cri.go:89] found id: "323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834"
	I0505 22:31:20.167941   61991 cri.go:89] found id: ""
	I0505 22:31:20.167948   61991 logs.go:276] 1 containers: [323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834]
	I0505 22:31:20.167995   61991 ssh_runner.go:195] Run: which crictl
	I0505 22:31:20.172858   61991 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0505 22:31:20.172928   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0505 22:31:20.214583   61991 cri.go:89] found id: ""
	I0505 22:31:20.214612   61991 logs.go:276] 0 containers: []
	W0505 22:31:20.214622   61991 logs.go:278] No container was found matching "kube-proxy"
	I0505 22:31:20.214629   61991 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0505 22:31:20.214690   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0505 22:31:20.260136   61991 cri.go:89] found id: "5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c"
	I0505 22:31:20.260171   61991 cri.go:89] found id: ""
	I0505 22:31:20.260181   61991 logs.go:276] 1 containers: [5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c]
	I0505 22:31:20.260238   61991 ssh_runner.go:195] Run: which crictl
	I0505 22:31:20.266129   61991 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0505 22:31:20.266206   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0505 22:31:20.310403   61991 cri.go:89] found id: ""
	I0505 22:31:20.310431   61991 logs.go:276] 0 containers: []
	W0505 22:31:20.310442   61991 logs.go:278] No container was found matching "kindnet"
	I0505 22:31:20.310449   61991 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0505 22:31:20.310512   61991 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0505 22:31:20.359128   61991 cri.go:89] found id: ""
	I0505 22:31:20.359160   61991 logs.go:276] 0 containers: []
	W0505 22:31:20.359167   61991 logs.go:278] No container was found matching "storage-provisioner"
	I0505 22:31:20.359176   61991 logs.go:123] Gathering logs for container status ...
	I0505 22:31:20.359187   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0505 22:31:20.406907   61991 logs.go:123] Gathering logs for kubelet ...
	I0505 22:31:20.406937   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0505 22:31:20.538840   61991 logs.go:123] Gathering logs for dmesg ...
	I0505 22:31:20.538879   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0505 22:31:20.556890   61991 logs.go:123] Gathering logs for describe nodes ...
	I0505 22:31:20.556917   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0505 22:31:20.647955   61991 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0505 22:31:20.647976   61991 logs.go:123] Gathering logs for kube-apiserver [864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc] ...
	I0505 22:31:20.647987   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc"
	I0505 22:31:20.693172   61991 logs.go:123] Gathering logs for kube-scheduler [323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834] ...
	I0505 22:31:20.693203   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834"
	I0505 22:31:20.776554   61991 logs.go:123] Gathering logs for kube-controller-manager [5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c] ...
	I0505 22:31:20.776590   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c"
	I0505 22:31:20.824727   61991 logs.go:123] Gathering logs for CRI-O ...
	I0505 22:31:20.824758   61991 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0505 22:31:21.058611   61991 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002425636s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000427361s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0505 22:31:21.058665   61991 out.go:239] * 
	W0505 22:31:21.058718   61991 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002425636s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000427361s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0505 22:31:21.058744   61991 out.go:239] * 
	W0505 22:31:21.059589   61991 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 22:31:21.062458   61991 out.go:177] 
	W0505 22:31:21.063669   61991 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002425636s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000427361s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0505 22:31:21.063713   61991 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0505 22:31:21.063738   61991 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0505 22:31:21.065192   61991 out.go:177] 
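	The kubeadm and minikube messages above already name the triage steps for this K8S_KUBELET_NOT_RUNNING exit; a minimal shell sketch of them, assuming the commands are run on the affected node and that the profile matches the kubernetes-upgrade-131082 node name seen in the CRI-O log below, would be:

		# Check the kubelet, as suggested by the kubeadm output above
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		# List control-plane containers and inspect the failing one (CONTAINERID is a placeholder)
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
		# If the cgroup-driver mismatch hinted at above is the cause, a retry could pass:
		# minikube start -p kubernetes-upgrade-131082 --extra-config=kubelet.cgroup-driver=systemd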
	
	
	==> CRI-O <==
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.199222635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714948282199189047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17c1cfe9-33b5-4cbd-9148-27e6f14e8c9a name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.200138086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66e1f554-08bd-439e-a8cc-b889bee526ac name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.200195275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66e1f554-08bd-439e-a8cc-b889bee526ac name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.200302372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c,PodSandboxId:7d8c627828f6062c2295abd8a940982d787689862473784191f6886ab131433e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714948216537568797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897ecc740d330fdc21cae6128629b57d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc,PodSandboxId:a6a2bc2fa54f510162aa1fd0b01110c489814d419889d9e2526a810a40d84851,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714948208534727067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35204b9fd48f4ff6cc00f4442205e894,},Annotations:map[string]string{io.kubernetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834,PodSandboxId:bdc81cb715f155e730e5f49c5f107692af4f03cfbe3d421dd0c6c7b4c30511b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714948041370716497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a105ed86a5fc74d177b03edc1cd6113,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66e1f554-08bd-439e-a8cc-b889bee526ac name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.241523530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9d8e267-921e-4908-8fdb-374fd4d0c175 name=/runtime.v1.RuntimeService/Version
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.241701760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9d8e267-921e-4908-8fdb-374fd4d0c175 name=/runtime.v1.RuntimeService/Version
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.243464718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7af5f000-e4a8-48e2-b282-7800e80edc5b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.244009342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714948282243978570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7af5f000-e4a8-48e2-b282-7800e80edc5b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.244841476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b2263de-f70a-4eaf-a2ad-853f21501ad5 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.244927293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b2263de-f70a-4eaf-a2ad-853f21501ad5 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.245030802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c,PodSandboxId:7d8c627828f6062c2295abd8a940982d787689862473784191f6886ab131433e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714948216537568797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897ecc740d330fdc21cae6128629b57d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.c
ontainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc,PodSandboxId:a6a2bc2fa54f510162aa1fd0b01110c489814d419889d9e2526a810a40d84851,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714948208534727067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35204b9fd48f4ff6cc00f4442205e894,},Annotations:map[string]string{io.kubernetes.container.hash: 77cf3021,io.kubernetes.conta
iner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834,PodSandboxId:bdc81cb715f155e730e5f49c5f107692af4f03cfbe3d421dd0c6c7b4c30511b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714948041370716497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a105ed86a5fc74d177b03edc1cd6113,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container
.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b2263de-f70a-4eaf-a2ad-853f21501ad5 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.289958037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56db0886-b812-4ed0-86f0-833cf6dae896 name=/runtime.v1.RuntimeService/Version
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.290055533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56db0886-b812-4ed0-86f0-833cf6dae896 name=/runtime.v1.RuntimeService/Version
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.292018586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=621681e1-935d-4770-a579-0a921c97fc0b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.292382984Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714948282292355854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=621681e1-935d-4770-a579-0a921c97fc0b name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.293100876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aa3ee27-17dc-4b80-af7f-80f10970ab75 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.293151673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aa3ee27-17dc-4b80-af7f-80f10970ab75 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.293245334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c,PodSandboxId:7d8c627828f6062c2295abd8a940982d787689862473784191f6886ab131433e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714948216537568797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897ecc740d330fdc21cae6128629b57d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.c
ontainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc,PodSandboxId:a6a2bc2fa54f510162aa1fd0b01110c489814d419889d9e2526a810a40d84851,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714948208534727067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35204b9fd48f4ff6cc00f4442205e894,},Annotations:map[string]string{io.kubernetes.container.hash: 77cf3021,io.kubernetes.conta
iner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834,PodSandboxId:bdc81cb715f155e730e5f49c5f107692af4f03cfbe3d421dd0c6c7b4c30511b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714948041370716497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a105ed86a5fc74d177b03edc1cd6113,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container
.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5aa3ee27-17dc-4b80-af7f-80f10970ab75 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.334473385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41afa0f8-d452-44a3-945e-18f8e64abf20 name=/runtime.v1.RuntimeService/Version
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.334546400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41afa0f8-d452-44a3-945e-18f8e64abf20 name=/runtime.v1.RuntimeService/Version
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.336392891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b44fe891-b000-47da-a716-87d08fe70e18 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.336895523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714948282336867094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b44fe891-b000-47da-a716-87d08fe70e18 name=/runtime.v1.ImageService/ImageFsInfo
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.337443622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07d84739-67cd-47a7-9321-d2604d72cbb3 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.337543567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07d84739-67cd-47a7-9321-d2604d72cbb3 name=/runtime.v1.RuntimeService/ListContainers
	May 05 22:31:22 kubernetes-upgrade-131082 crio[3207]: time="2024-05-05 22:31:22.337717215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c,PodSandboxId:7d8c627828f6062c2295abd8a940982d787689862473784191f6886ab131433e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714948216537568797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897ecc740d330fdc21cae6128629b57d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.c
ontainer.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc,PodSandboxId:a6a2bc2fa54f510162aa1fd0b01110c489814d419889d9e2526a810a40d84851,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714948208534727067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35204b9fd48f4ff6cc00f4442205e894,},Annotations:map[string]string{io.kubernetes.container.hash: 77cf3021,io.kubernetes.conta
iner.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834,PodSandboxId:bdc81cb715f155e730e5f49c5f107692af4f03cfbe3d421dd0c6c7b4c30511b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714948041370716497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-131082,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a105ed86a5fc74d177b03edc1cd6113,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container
.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07d84739-67cd-47a7-9321-d2604d72cbb3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5b85663be5c61       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   About a minute ago   Exited              kube-controller-manager   15                  7d8c627828f60       kube-controller-manager-kubernetes-upgrade-131082
	864cd85c736e2       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   About a minute ago   Exited              kube-apiserver            15                  a6a2bc2fa54f5       kube-apiserver-kubernetes-upgrade-131082
	323a7ed1b2328       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   4 minutes ago        Running             kube-scheduler            4                   bdc81cb715f15       kube-scheduler-kubernetes-upgrade-131082
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
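	A minimal follow-up probe, assuming the apiserver is expected on localhost:8443 as the error indicates (it is listed as Exited in the container status above, so this would be expected to fail as well):
	  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"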
	
	
	==> dmesg <==
	[  +0.198008] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.146337] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.278928] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +4.966195] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.067729] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.056219] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +3.859565] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.163642] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.477958] systemd-fstab-generator[1262]: Ignoring "noauto" option for root device
	[  +6.662486] kauditd_printk_skb: 15 callbacks suppressed
	[May 5 22:17] kauditd_printk_skb: 80 callbacks suppressed
	[ +11.371195] systemd-fstab-generator[2743]: Ignoring "noauto" option for root device
	[  +0.403581] systemd-fstab-generator[2868]: Ignoring "noauto" option for root device
	[  +0.267843] systemd-fstab-generator[2903]: Ignoring "noauto" option for root device
	[  +0.242862] systemd-fstab-generator[2922]: Ignoring "noauto" option for root device
	[  +0.605042] systemd-fstab-generator[3041]: Ignoring "noauto" option for root device
	[May 5 22:18] systemd-fstab-generator[3348]: Ignoring "noauto" option for root device
	[  +0.121621] kauditd_printk_skb: 203 callbacks suppressed
	[  +2.176860] systemd-fstab-generator[3473]: Ignoring "noauto" option for root device
	[May 5 22:19] kauditd_printk_skb: 81 callbacks suppressed
	[May 5 22:23] systemd-fstab-generator[10102]: Ignoring "noauto" option for root device
	[ +22.555692] kauditd_printk_skb: 79 callbacks suppressed
	[May 5 22:27] systemd-fstab-generator[11996]: Ignoring "noauto" option for root device
	[  +2.355716] kauditd_printk_skb: 39 callbacks suppressed
	[ +21.557699] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> kernel <==
	 22:31:22 up 15 min,  0 users,  load average: 0.04, 0.16, 0.16
	Linux kubernetes-upgrade-131082 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc] <==
	I0505 22:30:08.719819       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0505 22:30:08.972873       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:08.973937       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0505 22:30:08.974083       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0505 22:30:08.976288       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 22:30:08.978226       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0505 22:30:08.978280       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0505 22:30:08.978460       1 instance.go:299] Using reconciler: lease
	W0505 22:30:08.979388       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:09.974397       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:09.974427       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:09.980551       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:11.409556       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:11.504387       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:11.557087       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:14.148507       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:14.229527       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:14.379041       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:17.809912       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:17.878968       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:18.133735       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:24.204505       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:24.369495       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 22:30:25.175970       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0505 22:30:28.979750       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
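	The apiserver log above shows every attempt to reach etcd on 127.0.0.1:2379 being refused until the storage factory times out; a minimal check (assuming the same crictl/CRI-O setup as above) would be to confirm whether an etcd container exists at all:
	  crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep etcd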
	
	
	==> kube-controller-manager [5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c] <==
	I0505 22:30:17.676455       1 serving.go:380] Generated self-signed cert in-memory
	I0505 22:30:17.998829       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 22:30:17.998891       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 22:30:18.001110       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 22:30:18.001276       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0505 22:30:18.001293       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 22:30:18.001537       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0505 22:30:38.004373       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.41:8443/healthz\": dial tcp 192.168.39.41:8443: connect: connection refused"
	
	
	==> kube-scheduler [323a7ed1b232823638de9954fec12d12c568e620f2ab4dca89733a236cf09834] <==
	E0505 22:30:52.557079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.41:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:30:56.712575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:30:56.712805       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:06.712609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.41:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:06.712839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.41:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:07.806366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.41:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:07.806541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.41:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:09.179436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.41:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:09.179480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.41:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:10.092483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:10.092565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:12.247171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.41:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:12.247230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.41:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:12.923177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.41:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:12.923231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.41:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:13.483589       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.41:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:13.483746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.41:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:16.889316       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.41:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:16.889420       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.41:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:17.456332       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.41:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:17.456378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.41:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:18.695112       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.41:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:18.695217       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.41:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0505 22:31:22.086476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0505 22:31:22.086519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	
	
	==> kubelet <==
	May 05 22:31:05 kubernetes-upgrade-131082 kubelet[12003]: W0505 22:31:05.588855   12003 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-131082&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	May 05 22:31:05 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:05.588965   12003 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-131082&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	May 05 22:31:09 kubernetes-upgrade-131082 kubelet[12003]: I0505 22:31:09.517940   12003 scope.go:117] "RemoveContainer" containerID="5b85663be5c61802d694fb290950fbd164bf2ba5d8e1892b340f29a7d0c9286c"
	May 05 22:31:09 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:09.518300   12003 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-131082_kube-system(897ecc740d330fdc21cae6128629b57d)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-131082" podUID="897ecc740d330fdc21cae6128629b57d"
	May 05 22:31:09 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:09.606884   12003 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-131082\" not found"
	May 05 22:31:10 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:10.528443   12003 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-131082_kube-system_686f0cef4caf339962b3f1941c1896cc_1\" is already in use by 52d6d74b612f7c34a67fda74b883eca96e66be385abcfc7e9d09ad058f345d09. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="5082002dfe1f62210d531d4d677a51fa7e5c6d0af079046fc000b90e0966e3d0"
	May 05 22:31:10 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:10.529232   12003 kuberuntime_manager.go:1256] container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.12-0,Command:[etcd --advertise-client-urls=https://192.168.39.41:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.39.41:2380 --initial-cluster=kubernetes-upgrade-131082=https://192.168.39.41:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.39.41:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.39.41:2380 --name=kubernetes-upgrade-131082 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca
.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?exclude=NOSPACE&serializable=true,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodS
econds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?serializable=false,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-kubernetes-upgrade-131082_kube-system(686f0cef4caf339962b3f1941c1896cc): CreateContainerError: the container name "k8s_etcd_etcd-kubernetes-upgrade-131082_kube-system_686f0cef4caf339962b3f1941c1896cc_1" is already in use by 52d6d74b612f7c34a67fda74b883eca96e66be385abcfc7e9d09ad058f345d09. You have to remove that container to be able to
reuse that name: that name is already in use
	May 05 22:31:10 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:10.529364   12003 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-131082_kube-system_686f0cef4caf339962b3f1941c1896cc_1\\\" is already in use by 52d6d74b612f7c34a67fda74b883eca96e66be385abcfc7e9d09ad058f345d09. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-131082" podUID="686f0cef4caf339962b3f1941c1896cc"
	May 05 22:31:11 kubernetes-upgrade-131082 kubelet[12003]: I0505 22:31:11.005578   12003 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-131082"
	May 05 22:31:11 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:11.006726   12003 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.41:8443: connect: connection refused" node="kubernetes-upgrade-131082"
	May 05 22:31:12 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:12.000942   12003 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-131082?timeout=10s\": dial tcp 192.168.39.41:8443: connect: connection refused" interval="7s"
	May 05 22:31:12 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:12.334537   12003 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.41:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-131082.17ccb827e2b681a0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-131082,UID:kubernetes-upgrade-131082,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-131082 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-131082,},FirstTimestamp:2024-05-05 22:27:19.568163232 +0000 UTC m=+0.641144277,LastTimestamp:2024-05-05 22:27:19.568163232 +0000 UTC m=+0.641144277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,
ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-131082,}"
	May 05 22:31:17 kubernetes-upgrade-131082 kubelet[12003]: I0505 22:31:17.518390   12003 scope.go:117] "RemoveContainer" containerID="864cd85c736e2ce457e1a525229a8d3ca9664b69d7693f3ff59d25492879dafc"
	May 05 22:31:17 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:17.518902   12003 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-131082_kube-system(35204b9fd48f4ff6cc00f4442205e894)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-131082" podUID="35204b9fd48f4ff6cc00f4442205e894"
	May 05 22:31:18 kubernetes-upgrade-131082 kubelet[12003]: I0505 22:31:18.009301   12003 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-131082"
	May 05 22:31:18 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:18.010502   12003 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.41:8443: connect: connection refused" node="kubernetes-upgrade-131082"
	May 05 22:31:19 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:19.002473   12003 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-131082?timeout=10s\": dial tcp 192.168.39.41:8443: connect: connection refused" interval="7s"
	May 05 22:31:19 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:19.579164   12003 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 22:31:19 kubernetes-upgrade-131082 kubelet[12003]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 22:31:19 kubernetes-upgrade-131082 kubelet[12003]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 22:31:19 kubernetes-upgrade-131082 kubelet[12003]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 22:31:19 kubernetes-upgrade-131082 kubelet[12003]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 22:31:19 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:19.607138   12003 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-131082\" not found"
	May 05 22:31:22 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:22.336246   12003 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.41:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-131082.17ccb827e2b681a0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-131082,UID:kubernetes-upgrade-131082,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-131082 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-131082,},FirstTimestamp:2024-05-05 22:27:19.568163232 +0000 UTC m=+0.641144277,LastTimestamp:2024-05-05 22:27:19.568163232 +0000 UTC m=+0.641144277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,
ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-131082,}"
	May 05 22:31:22 kubernetes-upgrade-131082 kubelet[12003]: E0505 22:31:22.386047   12003 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.39.41:8443: connect: connection refused
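	The CreateContainerError in the kubelet log above states that the stale etcd container has to be removed before its name can be reused; a hypothetical cleanup step (using the container ID quoted in that error) would be:
	  crictl --runtime-endpoint unix:///var/run/crio/crio.sock rm 52d6d74b612f7c34a67fda74b883eca96e66be385abcfc7e9d09ad058f345d09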
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-131082 -n kubernetes-upgrade-131082
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-131082 -n kubernetes-upgrade-131082: exit status 2 (252.801529ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-131082" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-131082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-131082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-131082: (1.14789563s)
--- FAIL: TestKubernetesUpgrade (1237.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (7200.067s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
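The HairPin subtest above checks hairpin connectivity: the netcat pod dials the Service named "netcat" (the same name used in the test command) on port 8080 from inside the pod. A minimal sketch for re-running the same probe by hand, assuming the bridge-831483 context and the netcat deployment/Service from the test still exist:
  kubectl --context bridge-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080" && echo hairpin-ok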
E0505 22:54:31.829375   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 22:54:32.040946   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/auto-831483/client.crt: no such file or directory
E0505 22:54:34.778499   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:54:52.521520   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/auto-831483/client.crt: no such file or directory
E0505 22:54:54.297456   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:54.302801   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:54.313142   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:54.333459   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:54.373744   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:54.454287   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:54.614910   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:54.935638   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:55.576349   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:56.857244   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:54:59.393354   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/client.crt: no such file or directory
E0505 22:54:59.417662   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:55:04.538259   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:55:14.779363   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:55:27.078740   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/client.crt: no such file or directory
E0505 22:55:33.481915   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/auto-831483/client.crt: no such file or directory
E0505 22:55:35.260287   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:56:16.221129   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/kindnet-831483/client.crt: no such file or directory
E0505 22:56:23.627474   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:23.632783   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:23.643051   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:23.663305   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:23.703583   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:23.783974   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:23.944418   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:24.265026   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:24.905715   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:26.185966   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:28.746351   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:33.867285   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:43.518845   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:43.524133   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:43.534465   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:43.554744   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:43.595054   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:43.675417   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:43.835880   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:44.108433   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
E0505 22:56:44.156548   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:44.796984   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:46.077740   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:48.638380   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:51.947917   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 22:56:53.759075   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:56:55.402402   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/auto-831483/client.crt: no such file or directory
E0505 22:57:03.999769   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/custom-flannel-831483/client.crt: no such file or directory
E0505 22:57:04.589380   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/calico-831483/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestStartStop (42m49s)
	TestStartStop/group/default-k8s-diff-port (25m50s)
	TestStartStop/group/default-k8s-diff-port/serial (25m50s)
	TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (3m13s)

goroutine 8238 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 3 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000637040, 0xc001449bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00059ea80, {0x49f19a0, 0x2b, 0x2b}, {0x26af9c0?, 0xc000602480?, 0x4aadd40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00145ae60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00145ae60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00059db80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 7014 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc00276c750, 0xc00276c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0xd0?, 0xc00276c750, 0xc00276c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc0025eb140?, 0xc00273eb00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00276c7d0?, 0x594064?, 0xc0024d2bb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7002
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 920 [chan send, 103 minutes]:
os/exec.(*Cmd).watchCtx(0xc00274c160, 0xc0025ead20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 919
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 125 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc002261750, 0xc0022cdf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0x20?, 0xc002261750, 0xc002261798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0x49090a6d65747379?, 0x3a30322035303530?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x687420676e697355?, 0x6420326d766b2065?, 0x6162207265766972?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 70 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 69
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 594 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc0014750e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 560
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 3068 [chan receive, 25 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027f8f80, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3063
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 126 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 125
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 6838 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6837
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 124 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001552050, 0x2d)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002232ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001552080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014efec0, {0x36b6400, 0xc0014e9020}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014efec0, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 6836 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002be6810, 0x10)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0028051a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002be6840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0027e4630, {0x36b6400, 0xc002a98960}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0027e4630, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 6856 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0028052c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 6851
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 5809 [IO wait]:
internal/poll.runtime_pollWait(0x7fc84875ab90, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0029f9880?, 0xc0024a6000?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0029f9880, {0xc0024a6000, 0x800, 0x800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0029f9880, {0xc0024a6000?, 0xc0024d88c0?, 0x2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc002304ea0, {0xc0024a6000?, 0xc0024a605f?, 0x70?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc0024d1650, {0xc0024a6000?, 0x0?, 0xc0024d1650?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0028649b0, {0x36b6bc0, 0xc0024d1650})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc002864708, {0x7fc8484cec98, 0xc0024d0318}, 0xc0000a7980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc002864708, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc002864708, {0xc00013c000, 0x1000, 0xc0022de700?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00274c480, {0xc0028092a0, 0x9, 0x49adc00?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36b5080, 0xc00274c480}, {0xc0028092a0, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0028092a0, 0x9, 0xa7dc0?}, {0x36b5080?, 0xc00274c480?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc002809260)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0000a7fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc002481c80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 5808
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

goroutine 821 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc002250f50, 0xc0022d2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0xc0?, 0xc002250f50, 0xc002250f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc0022184e0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002250fd0?, 0x594064?, 0xc002236400?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 794
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 794 [chan receive, 103 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002236c40, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 840
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 596 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc00223efc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 613
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 822 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 821
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 7013 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00146b6d0, 0xf)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0028719e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00146b700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014b6240, {0x36b6400, 0xc003343200}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014b6240, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7002
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 152 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002232fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 153 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001552080, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 595 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc0014750e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 560
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 7789 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0027f9f50, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc003cb1860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00256cc40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00284ae30, {0x36b6400, 0xc0014e9b30}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00284ae30, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7794
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 597 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc00223efc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 613
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 7306 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7305
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 7358 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc003cb1ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 7343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 7659 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7658
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 7627 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002edac60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 7623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 7304 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001553f50, 0xe)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00274de60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0024c2000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002a9b890, {0x36b6400, 0xc0014e8390}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002a9b890, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7237
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 682 [IO wait, 107 minutes]:
internal/poll.runtime_pollWait(0x7fc84875ae78, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x11?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001c3400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0001c3400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0029086c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0029086c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000bfc0f0, {0x36cd0a0, 0xc0029086c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc000bfc0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x594064?, 0xc0014929c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 679
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 7791 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7790
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2848 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2847
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 7777 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc003cb1980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 7773
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 820 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002236b10, 0x29)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0030753e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002236c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00280c4d0, {0x36b6400, 0xc002494390}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00280c4d0, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 794
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 793 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc003075500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 840
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 942 [chan send, 103 minutes]:
os/exec.(*Cmd).watchCtx(0xc002778dc0, 0xc0027ba600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 767
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 7653 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002f46f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 7652
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2852 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002edac00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2817
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2846 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0027f8550, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002edaae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027f8580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001452000, {0x36b6400, 0xc0027f2030}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001452000, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2853
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 7628 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002237e80, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2267 [chan receive, 44 minutes]:
testing.(*T).Run(0xc001492ea0, {0x26553ab?, 0x552353?}, 0x315b8a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001492ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001492ea0, 0x315b6c8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 6857 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002be6840, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6851
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 7001 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002871b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 7000
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1028 [chan send, 103 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a93600, 0xc002ac2120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1027
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 7658 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc002768750, 0xc002768798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0xe0?, 0xc002768750, 0xc002768798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc0027687b0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x99de1b?, 0xc002481200?, 0xc00247cdc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7654
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2383 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001493040, 0x315b8a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2267
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 7592 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7591
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 7790 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc00295f750, 0xc00295f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0xa0?, 0xc00295f750, 0xc00295f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc00295f7b0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc00357c580?, 0xc0014a1da0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7794
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 7379 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc003e00750, 0xd)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc003cb1da0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc003e00780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003c78800, {0x36b6400, 0xc003e80360}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc003c78800, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7359
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3360 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002237510, 0x14)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002b1f080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002237540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0024ec160, {0x36b6400, 0xc003178300}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0024ec160, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 7015 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7014
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3361 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc002260f50, 0xc002260f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0x50?, 0xc002260f50, 0xc002260f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc002218ea0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002260fd0?, 0x594064?, 0xc00016a640?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 7591 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc00233c750, 0xc00233c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0x0?, 0xc00233c750, 0xc00233c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc002227040?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00233c7d0?, 0x594064?, 0xc002c669f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7628
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 1131 [select, 103 minutes]:
net/http.(*persistConn).readLoop(0xc002cccd80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1129
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 1132 [select, 103 minutes]:
net/http.(*persistConn).writeLoop(0xc002cccd80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1129
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 3080 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0027f8f50, 0x4)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002b1e480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027f8f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0027b73b0, {0x36b6400, 0xc000bf1020}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0027b73b0, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3068
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 7237 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0024c2000, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7235
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3179 [chan receive, 22 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002237540, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 7381 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 7380
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 7657 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0022fda90, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002f46de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0022fdac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000593a40, {0x36b6400, 0xc003342810}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000593a40, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7654
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 7359 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc003e00780, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 7305 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc002769f50, 0xc002769f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0xe0?, 0xc002769f50, 0xc002769f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc002769fb0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x99de1b?, 0xc002c88d80?, 0xc000002000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7237
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2402 [chan receive, 25 minutes]:
testing.(*T).Run(0xc001493a00, {0x265693d?, 0x0?}, 0xc0006a4800)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001493a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001493a00, 0xc002236680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2383
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3003 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0022271e0, {0x267b1ad?, 0x60400000004?}, 0xc003cbc080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0022271e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0022271e0, 0xc0006a4800)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2402
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 7794 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00256cc40, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7773
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2847 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc002260f50, 0xc002272f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0x50?, 0xc002260f50, 0xc002260f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc002218ea0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002260fd0?, 0x594064?, 0xc00016a640?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2853
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 7380 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc0023fe750, 0xc0023fe798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0x0?, 0xc0023fe750, 0xc0023fe798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc0023fe7b0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0023fe7d0?, 0x594064?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7359
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 7590 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002237e50, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2147720?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002eda9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002237e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003effd20, {0x36b6400, 0xc0024fb1a0}, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc003effd20, 0x3b9aca00, 0x0, 0x1, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 7628
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2853 [chan receive, 38 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027f8580, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2817
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3082 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3081
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3178 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002b1f1a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 7002 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00146b700, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7000
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 7975 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36da050, 0xc003c52730}, {0x36cd760, 0xc00256b780}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36da0c0?, 0xc0005ca070?}, 0x3b9aca00, 0xc00006fd38?, 0x1, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36da0c0, 0xc0005ca070}, 0xc002218000, {0xc0022e6020, 0x1c}, {0x267b149, 0x14}, {0x2692cda, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36da0c0, 0xc0005ca070}, 0xc002218000, {0xc0022e6020, 0x1c}, {0x267e02f?, 0xc000096760?}, {0x552353?, 0x4a26cf?}, {0xc0005d5c00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002218000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002218000, 0xc003cbc080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3003
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3067 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002b1e5a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3063
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 7236 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00270e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 7235
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3081 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc00276d750, 0xc00276d798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0x80?, 0xc00276d750, 0xc00276d798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc0022f5ba0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc000840000?, 0xc0014a0180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3068
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3394 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3361
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 6837 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36da280, 0xc0005b8a80}, 0xc002965f50, 0xc002965f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36da280, 0xc0005b8a80}, 0xd3?, 0xc002965f50, 0xc002965f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36da280?, 0xc0005b8a80?}, 0xc002965fb0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x99de1b?, 0xc000003500?, 0xc0028e8fb0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 7654 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0022fdac0, 0xc0005b8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 7652
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585
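
Most of the goroutines dumped above are parked in the same client-go shape: each cached TLS transport starts a cert-rotation controller whose worker blocks in workqueue.(*Type).Get under a wait.Until/BackoffUntil loop until a shared stop channel closes. A minimal standalone sketch of that pattern, assuming only k8s.io/client-go and k8s.io/apimachinery at the v0.30.0 versions shown in the stack paths (illustrative, not the report's code):

// workerpattern.go: standalone sketch of the client-go worker loop the parked
// goroutines above are sitting in (workqueue.Get under wait.Until).
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.New()      // workers park in queue.Get(), the sync.Cond.Wait frames above
	stopCh := make(chan struct{}) // plays the role of the shared stop channel in the stacks

	worker := func() {
		for {
			item, shutdown := queue.Get()
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			queue.Done(item)
		}
	}

	// wait.Until re-invokes worker once per second until stopCh closes; it is
	// built on the JitterUntil/BackoffUntil chain visible in the traces.
	go wait.Until(worker, time.Second, stopCh)

	queue.Add("rotate-cert")
	time.Sleep(100 * time.Millisecond)
	close(stopCh)
	queue.ShutDown()
}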

                                                
                                    

Test pass (225/275)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 49.79
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 13.66
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 88.85
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 212.24
29 TestAddons/parallel/Registry 18.02
31 TestAddons/parallel/InspektorGadget 17.7
33 TestAddons/parallel/HelmTiller 12.99
35 TestAddons/parallel/CSI 48.57
36 TestAddons/parallel/Headlamp 16.29
37 TestAddons/parallel/CloudSpanner 5.63
38 TestAddons/parallel/LocalPath 59.6
39 TestAddons/parallel/NvidiaDevicePlugin 7.21
40 TestAddons/parallel/Yakd 6.01
44 TestAddons/serial/GCPAuth/Namespaces 0.12
46 TestCertOptions 80.91
47 TestCertExpiration 284.76
49 TestForceSystemdFlag 80.31
50 TestForceSystemdEnv 79.98
52 TestKVMDriverInstallOrUpdate 5.12
56 TestErrorSpam/setup 44.54
57 TestErrorSpam/start 0.37
58 TestErrorSpam/status 0.75
59 TestErrorSpam/pause 1.66
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 4.98
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 61.06
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 47.79
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
73 TestFunctional/serial/CacheCmd/cache/add_local 2.28
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
81 TestFunctional/serial/ExtraConfig 36.15
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.61
84 TestFunctional/serial/LogsFileCmd 1.52
85 TestFunctional/serial/InvalidService 4.01
87 TestFunctional/parallel/ConfigCmd 0.4
88 TestFunctional/parallel/DashboardCmd 17.36
89 TestFunctional/parallel/DryRun 0.3
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 1.28
95 TestFunctional/parallel/ServiceCmdConnect 8.53
96 TestFunctional/parallel/AddonsCmd 0.16
97 TestFunctional/parallel/PersistentVolumeClaim 52.63
99 TestFunctional/parallel/SSHCmd 0.48
100 TestFunctional/parallel/CpCmd 1.46
101 TestFunctional/parallel/MySQL 37.5
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.4
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
111 TestFunctional/parallel/License 0.68
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
114 TestFunctional/parallel/ProfileCmd/profile_list 0.33
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
116 TestFunctional/parallel/MountCmd/any-port 10.78
117 TestFunctional/parallel/ServiceCmd/List 0.53
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
120 TestFunctional/parallel/MountCmd/specific-port 2.06
121 TestFunctional/parallel/ServiceCmd/Format 0.35
122 TestFunctional/parallel/ServiceCmd/URL 0.39
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.94
125 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
131 TestFunctional/parallel/ImageCommands/Setup 2.19
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.09
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.78
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 16.25
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.95
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.98
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.35
151 TestFunctional/delete_addon-resizer_images 0.07
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 259.26
158 TestMultiControlPlane/serial/DeployApp 8.35
159 TestMultiControlPlane/serial/PingHostFromPods 1.39
160 TestMultiControlPlane/serial/AddWorkerNode 76.6
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.59
163 TestMultiControlPlane/serial/CopyFile 13.89
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
172 TestMultiControlPlane/serial/RestartCluster 348.9
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
179 TestJSONOutput/start/Command 101.1
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.79
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.66
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.41
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.23
207 TestMainNoArgs 0.06
208 TestMinikubeProfile 97.8
211 TestMountStart/serial/StartWithMountFirst 26.81
212 TestMountStart/serial/VerifyMountFirst 0.39
213 TestMountStart/serial/StartWithMountSecond 29.74
214 TestMountStart/serial/VerifyMountSecond 0.39
215 TestMountStart/serial/DeleteFirst 0.89
216 TestMountStart/serial/VerifyMountPostDelete 0.4
217 TestMountStart/serial/Stop 1.35
218 TestMountStart/serial/RestartStopped 24.82
219 TestMountStart/serial/VerifyMountPostStop 0.41
222 TestMultiNode/serial/FreshStart2Nodes 106.26
223 TestMultiNode/serial/DeployApp2Nodes 5.87
224 TestMultiNode/serial/PingHostFrom2Pods 0.9
225 TestMultiNode/serial/AddNode 42.75
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.23
228 TestMultiNode/serial/CopyFile 7.55
229 TestMultiNode/serial/StopNode 3.18
230 TestMultiNode/serial/StartAfterStop 30.73
232 TestMultiNode/serial/DeleteNode 2.43
234 TestMultiNode/serial/RestartMultiNode 201.08
235 TestMultiNode/serial/ValidateNameConflict 46.35
242 TestScheduledStopUnix 119.49
246 TestRunningBinaryUpgrade 204.19
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
252 TestNoKubernetes/serial/StartWithK8s 98.77
253 TestStoppedBinaryUpgrade/Setup 2.58
254 TestStoppedBinaryUpgrade/Upgrade 125.41
255 TestNoKubernetes/serial/StartWithStopK8s 45.61
256 TestNoKubernetes/serial/Start 29.34
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 27.29
259 TestNoKubernetes/serial/Stop 1.51
260 TestNoKubernetes/serial/StartNoArgs 23.98
262 TestPause/serial/Start 81.54
263 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
279 TestNetworkPlugins/group/false 3.37
283 TestPause/serial/SecondStartNoReconfiguration 79.51
284 TestPause/serial/Pause 1.43
285 TestPause/serial/VerifyStatus 0.28
286 TestPause/serial/Unpause 0.83
287 TestPause/serial/PauseAgain 0.98
288 TestPause/serial/DeletePaused 1.04
289 TestPause/serial/VerifyDeletedResources 0.46
327 TestNetworkPlugins/group/auto/Start 120.58
337 TestNetworkPlugins/group/kindnet/Start 62.08
338 TestNetworkPlugins/group/auto/KubeletFlags 0.22
339 TestNetworkPlugins/group/auto/NetCatPod 12.23
340 TestNetworkPlugins/group/auto/DNS 0.17
341 TestNetworkPlugins/group/auto/Localhost 0.18
342 TestNetworkPlugins/group/auto/HairPin 0.95
343 TestNetworkPlugins/group/calico/Start 102.6
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
347 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
349 TestNetworkPlugins/group/custom-flannel/Start 97.94
350 TestNetworkPlugins/group/kindnet/DNS 0.16
351 TestNetworkPlugins/group/kindnet/Localhost 0.14
352 TestNetworkPlugins/group/kindnet/HairPin 0.14
353 TestNetworkPlugins/group/enable-default-cni/Start 116.69
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.23
356 TestNetworkPlugins/group/calico/NetCatPod 12.25
357 TestNetworkPlugins/group/calico/DNS 0.2
358 TestNetworkPlugins/group/calico/Localhost 0.15
359 TestNetworkPlugins/group/calico/HairPin 0.14
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
362 TestNetworkPlugins/group/custom-flannel/DNS 0.17
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
365 TestNetworkPlugins/group/flannel/Start 86.46
366 TestNetworkPlugins/group/bridge/Start 74.16
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.3
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
374 TestNetworkPlugins/group/bridge/NetCatPod 10.24
375 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
376 TestNetworkPlugins/group/flannel/NetCatPod 11.24
377 TestNetworkPlugins/group/bridge/DNS 33.2
378 TestNetworkPlugins/group/flannel/DNS 0.17
379 TestNetworkPlugins/group/flannel/Localhost 0.15
380 TestNetworkPlugins/group/flannel/HairPin 0.14
382 TestNetworkPlugins/group/bridge/Localhost 0.14
TestDownloadOnly/v1.20.0/json-events (49.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-583025 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-583025 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (49.788890068s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (49.79s)
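
The json-events subtest drives minikube with -o=json, which switches progress output to machine-readable JSON events, one object per line on stdout. A hedged sketch of consuming that stream from Go (the open-ended map is deliberate; the event schema is an assumption, not taken from minikube's documentation):

// jsonevents.go: hedged sketch of reading line-delimited JSON events from a
// "minikube start -o=json --download-only ..." invocation like the one above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-583025",
		"--driver=kvm2", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		var event map[string]any // schema left open on purpose
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			fmt.Println("non-JSON line:", scanner.Text())
			continue
		}
		fmt.Printf("event: %v\n", event)
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("minikube exited with error:", err)
	}
}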

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-583025
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-583025: exit status 85 (69.71679ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-583025 | jenkins | v1.33.0 | 05 May 24 20:57 UTC |          |
	|         | -p download-only-583025        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 20:57:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 20:57:14.725751   18810 out.go:291] Setting OutFile to fd 1 ...
	I0505 20:57:14.725848   18810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:57:14.725862   18810 out.go:304] Setting ErrFile to fd 2...
	I0505 20:57:14.725866   18810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:57:14.726052   18810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	W0505 20:57:14.726203   18810 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18602-11466/.minikube/config/config.json: open /home/jenkins/minikube-integration/18602-11466/.minikube/config/config.json: no such file or directory
	I0505 20:57:14.726786   18810 out.go:298] Setting JSON to true
	I0505 20:57:14.727650   18810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2382,"bootTime":1714940253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 20:57:14.727708   18810 start.go:139] virtualization: kvm guest
	I0505 20:57:14.730146   18810 out.go:97] [download-only-583025] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 20:57:14.731595   18810 out.go:169] MINIKUBE_LOCATION=18602
	W0505 20:57:14.730262   18810 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball: no such file or directory
	I0505 20:57:14.730302   18810 notify.go:220] Checking for updates...
	I0505 20:57:14.733084   18810 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 20:57:14.734460   18810 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 20:57:14.735793   18810 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:57:14.736995   18810 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0505 20:57:14.739210   18810 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0505 20:57:14.739434   18810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 20:57:14.840405   18810 out.go:97] Using the kvm2 driver based on user configuration
	I0505 20:57:14.840434   18810 start.go:297] selected driver: kvm2
	I0505 20:57:14.840451   18810 start.go:901] validating driver "kvm2" against <nil>
	I0505 20:57:14.840788   18810 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:57:14.840908   18810 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 20:57:14.855022   18810 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 20:57:14.855099   18810 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 20:57:14.855577   18810 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0505 20:57:14.855759   18810 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 20:57:14.855830   18810 cni.go:84] Creating CNI manager for ""
	I0505 20:57:14.855848   18810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:57:14.855858   18810 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 20:57:14.855931   18810 start.go:340] cluster config:
	{Name:download-only-583025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-583025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 20:57:14.856108   18810 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:57:14.858003   18810 out.go:97] Downloading VM boot image ...
	I0505 20:57:14.858047   18810 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 20:57:24.410122   18810 out.go:97] Starting "download-only-583025" primary control-plane node in "download-only-583025" cluster
	I0505 20:57:24.410148   18810 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0505 20:57:24.520359   18810 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0505 20:57:24.520409   18810 cache.go:56] Caching tarball of preloaded images
	I0505 20:57:24.520580   18810 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0505 20:57:24.522516   18810 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0505 20:57:24.522542   18810 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0505 20:57:24.629859   18810 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0505 20:57:38.119254   18810 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0505 20:57:38.119350   18810 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0505 20:57:39.022503   18810 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0505 20:57:39.022872   18810 profile.go:143] Saving config to /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/download-only-583025/config.json ...
	I0505 20:57:39.022906   18810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/download-only-583025/config.json: {Name:mkf6e15e4cb74742bad295972fbced7595902f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 20:57:39.023096   18810 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0505 20:57:39.023386   18810 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-583025 host does not exist
	  To start a cluster, run: "minikube start -p download-only-583025"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
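
The download lines in the log above fetch the preload tarball with a ?checksum=md5:... parameter so the result can be verified after it is written to the cache. A minimal sketch of that verification step, using the path and digest printed in the log (the helper is illustrative, not minikube's own download code):

// verifymd5.go: hedged sketch of checking a downloaded file against the md5
// digest carried in the preload URL above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// md5OfFile streams the file through an md5 hash and returns the hex digest.
func md5OfFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const tarball = "/home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	const want = "f93b07cde9c3289306cbaeb7a1803c19" // from the ?checksum=md5:... parameter in the log

	got, err := md5OfFile(tarball)
	if err != nil {
		fmt.Println("hash error:", err)
		return
	}
	if got != want {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
		return
	}
	fmt.Println("preload checksum OK")
}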

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-583025
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (13.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-302864 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-302864 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.658056966s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (13.66s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-302864
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-302864: exit status 85 (71.105814ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-583025 | jenkins | v1.33.0 | 05 May 24 20:57 UTC |                     |
	|         | -p download-only-583025        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| delete  | -p download-only-583025        | download-only-583025 | jenkins | v1.33.0 | 05 May 24 20:58 UTC | 05 May 24 20:58 UTC |
	| start   | -o=json --download-only        | download-only-302864 | jenkins | v1.33.0 | 05 May 24 20:58 UTC |                     |
	|         | -p download-only-302864        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 20:58:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 20:58:04.851794   19136 out.go:291] Setting OutFile to fd 1 ...
	I0505 20:58:04.851905   19136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:58:04.851915   19136 out.go:304] Setting ErrFile to fd 2...
	I0505 20:58:04.851919   19136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 20:58:04.852111   19136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 20:58:04.852653   19136 out.go:298] Setting JSON to true
	I0505 20:58:04.853493   19136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2432,"bootTime":1714940253,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 20:58:04.853549   19136 start.go:139] virtualization: kvm guest
	I0505 20:58:04.855755   19136 out.go:97] [download-only-302864] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 20:58:04.857131   19136 out.go:169] MINIKUBE_LOCATION=18602
	I0505 20:58:04.855923   19136 notify.go:220] Checking for updates...
	I0505 20:58:04.859825   19136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 20:58:04.861331   19136 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 20:58:04.862585   19136 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 20:58:04.863785   19136 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0505 20:58:04.866354   19136 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0505 20:58:04.866562   19136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 20:58:04.897395   19136 out.go:97] Using the kvm2 driver based on user configuration
	I0505 20:58:04.897420   19136 start.go:297] selected driver: kvm2
	I0505 20:58:04.897429   19136 start.go:901] validating driver "kvm2" against <nil>
	I0505 20:58:04.897773   19136 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:58:04.897863   19136 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18602-11466/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0505 20:58:04.912013   19136 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0505 20:58:04.912053   19136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 20:58:04.912528   19136 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0505 20:58:04.912698   19136 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 20:58:04.912773   19136 cni.go:84] Creating CNI manager for ""
	I0505 20:58:04.912787   19136 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0505 20:58:04.912794   19136 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 20:58:04.912861   19136 start.go:340] cluster config:
	{Name:download-only-302864 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-302864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 20:58:04.913002   19136 iso.go:125] acquiring lock: {Name:mk05a37c5afd8d748706c016c1665e3846f7161e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 20:58:04.914684   19136 out.go:97] Starting "download-only-302864" primary control-plane node in "download-only-302864" cluster
	I0505 20:58:04.914699   19136 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 20:58:05.425943   19136 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0505 20:58:05.425988   19136 cache.go:56] Caching tarball of preloaded images
	I0505 20:58:05.426131   19136 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0505 20:58:05.428136   19136 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0505 20:58:05.428163   19136 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0505 20:58:05.541635   19136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18602-11466/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-302864 host does not exist
	  To start a cluster, run: "minikube start -p download-only-302864"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-302864
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-490333 --alsologtostderr --binary-mirror http://127.0.0.1:42709 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-490333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-490333
--- PASS: TestBinaryMirror (0.58s)
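
TestBinaryMirror points --binary-mirror at a local HTTP endpoint on 127.0.0.1:42709 that stands in for the upstream Kubernetes release host. A minimal sketch of such a mirror as a static file server (the ./mirror directory layout is an assumption, not taken from the test):

// binarymirror.go: hedged sketch of a local file server used as a
// --binary-mirror target on 127.0.0.1.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve whatever release artifacts have been staged under ./mirror,
	// e.g. ./mirror/release/v1.30.0/bin/linux/amd64/kubectl (layout assumed).
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving binary mirror on 127.0.0.1:42709")
	log.Fatal(http.ListenAndServe("127.0.0.1:42709", nil))
}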

                                                
                                    
TestOffline (88.85s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-080623 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-080623 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.01620639s)
helpers_test.go:175: Cleaning up "offline-crio-080623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-080623
--- PASS: TestOffline (88.85s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-476078
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-476078: exit status 85 (62.135411ms)

                                                
                                                
-- stdout --
	* Profile "addons-476078" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-476078"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-476078
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-476078: exit status 85 (60.931472ms)

-- stdout --
	* Profile "addons-476078" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-476078"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (212.24s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-476078 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-476078 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.235703357s)
--- PASS: TestAddons/Setup (212.24s)
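The start above turns each addon on with a repeated --addons flag. A minimal sketch for reproducing and verifying a similar setup outside the test harness, assuming a locally installed minikube rather than the out/ test binary and reusing this run's profile name with a trimmed-down flag set:

  # Sketch only: the profile name comes from this run and is otherwise arbitrary.
  minikube start -p addons-476078 --memory=4000 --driver=kvm2 --container-runtime=crio \
    --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
  minikube addons list -p addons-476078    # one row per addon with its current status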

                                                
                                    
TestAddons/parallel/Registry (18.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 23.300801ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-l4nvm" [6d3660b5-72f0-4cb8-850d-66e3367f0b2d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009123376s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8z9cj" [2b07c767-5f91-4286-b104-2fd55988d9ad] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005201631s
addons_test.go:342: (dbg) Run:  kubectl --context addons-476078 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-476078 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-476078 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.144069285s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 ip
2024/05/05 21:02:09 [DEBUG] GET http://192.168.39.102:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.02s)
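The registry check above works entirely in-cluster: a throwaway busybox pod resolves the registry Service name and probes it with wget. A minimal sketch of the same probe, using the context name from this run:

  # Sketch of the in-cluster probe the test runs.
  kubectl --context addons-476078 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # Port 5000 on the node IP reported by "minikube ip" is what the
  # GET http://192.168.39.102:5000 line above exercises.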

                                                
                                    
TestAddons/parallel/InspektorGadget (17.7s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9p6xb" [d6075713-a8af-40a9-acb8-b23074959387] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005477088s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-476078
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-476078: (11.689686731s)
--- PASS: TestAddons/parallel/InspektorGadget (17.70s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.99s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.25403ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-2tngp" [9e6ccc20-fbbd-4495-a454-2e47945c33dc] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005347404s
addons_test.go:475: (dbg) Run:  kubectl --context addons-476078 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-476078 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.295439568s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-linux-amd64 -p addons-476078 addons disable helm-tiller --alsologtostderr -v=1: (1.683164121s)
--- PASS: TestAddons/parallel/HelmTiller (12.99s)

                                                
                                    
TestAddons/parallel/CSI (48.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.236581ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9e22b33a-3f17-4964-92a0-4fb4961fbce1] Pending
helpers_test.go:344: "task-pv-pod" [9e22b33a-3f17-4964-92a0-4fb4961fbce1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9e22b33a-3f17-4964-92a0-4fb4961fbce1] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.004099468s
addons_test.go:586: (dbg) Run:  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-476078 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-476078 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-476078 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-476078 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c193e267-b503-4263-9bea-0c19d3b92689] Pending
helpers_test.go:344: "task-pv-pod-restore" [c193e267-b503-4263-9bea-0c19d3b92689] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c193e267-b503-4263-9bea-0c19d3b92689] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.005086305s
addons_test.go:628: (dbg) Run:  kubectl --context addons-476078 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-476078 delete pod task-pv-pod-restore: (1.734578608s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-476078 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-476078 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-476078 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.832124027s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.57s)
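The CSI test walks a full claim -> pod -> snapshot -> restore cycle against the csi-hostpath driver. A condensed sketch of the kubectl sequence it runs (the testdata manifest contents are not reproduced here):

  # Condensed command sequence from the test above.
  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-476078 get pvc hpvc -o jsonpath={.status.phase}    # poll until Bound
  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-476078 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
  kubectl --context addons-476078 delete pod task-pv-pod
  kubectl --context addons-476078 delete pvc hpvc
  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-476078 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml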

                                                
                                    
TestAddons/parallel/Headlamp (16.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-476078 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-476078 --alsologtostderr -v=1: (1.289150437s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-9tvbl" [0a342843-0e7b-4235-8a87-1ab68db8e982] Pending
helpers_test.go:344: "headlamp-7559bf459f-9tvbl" [0a342843-0e7b-4235-8a87-1ab68db8e982] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-9tvbl" [0a342843-0e7b-4235-8a87-1ab68db8e982] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.004106147s
--- PASS: TestAddons/parallel/Headlamp (16.29s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-h96bj" [a1234b45-a1fe-4725-8e5b-b386a03a392f] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003901603s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-476078
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
TestAddons/parallel/LocalPath (59.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-476078 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-476078 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c89f2b36-e901-4cf1-ad1c-29cd230515bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c89f2b36-e901-4cf1-ad1c-29cd230515bb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c89f2b36-e901-4cf1-ad1c-29cd230515bb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.007648017s
addons_test.go:992: (dbg) Run:  kubectl --context addons-476078 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 ssh "cat /opt/local-path-provisioner/pvc-cbe9cb1d-6e41-4e52-b663-b8efdb599694_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-476078 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-476078 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-476078 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-476078 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.667145669s)
--- PASS: TestAddons/parallel/LocalPath (59.60s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.21s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4s79g" [b7211778-f5aa-4ebe-973a-ac4ee0054143] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004831559s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-476078
addons_test.go:1056: (dbg) Done: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-476078: (1.202317145s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.21s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-2nv87" [6020ab74-7313-45e6-8080-4e84b676efe6] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004216441s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-476078 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-476078 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (80.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-759256 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-759256 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m19.616764076s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-759256 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-759256 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-759256 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-759256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-759256
--- PASS: TestCertOptions (80.91s)
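TestCertOptions starts a cluster with extra apiserver SANs and a non-default apiserver port, then reads the served certificate with openssl. A minimal sketch for checking the SANs by hand; the grep filtering is an addition for readability and not part of the test:

  # Sketch: start with extra SANs and a custom port, then inspect the served cert.
  minikube start -p cert-options-759256 --memory=2048 --driver=kvm2 --container-runtime=crio \
    --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
  minikube -p cert-options-759256 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 "Subject Alternative Name"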

                                                
                                    
TestCertExpiration (284.76s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-239335 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-239335 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m24.590240144s)
E0505 22:18:14.995303   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-239335 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-239335 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (19.372742031s)
helpers_test.go:175: Cleaning up "cert-expiration-239335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-239335
--- PASS: TestCertExpiration (284.76s)
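TestCertExpiration is a two-phase run: start with 3-minute certificates, let them lapse, then start again with --cert-expiration=8760h so the certificates are regenerated. A rough reproduction sketch; the explicit sleep stands in for the test's wait between the two starts:

  # Two-phase sketch: short-lived certs first, then a restart with a long expiration.
  minikube start -p cert-expiration-239335 --memory=2048 --cert-expiration=3m \
    --driver=kvm2 --container-runtime=crio
  sleep 180    # allow the 3m certificates to lapse
  minikube start -p cert-expiration-239335 --memory=2048 --cert-expiration=8760h \
    --driver=kvm2 --container-runtime=crio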

                                                
                                    
TestForceSystemdFlag (80.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-767669 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0505 22:14:31.828744   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-767669 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.087459457s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-767669 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-767669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-767669
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-767669: (1.006447285s)
--- PASS: TestForceSystemdFlag (80.31s)
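With --force-systemd, CRI-O is expected to be configured for the systemd cgroup manager, which is why the test reads the 02-crio.conf drop-in. A minimal sketch; the grep and the expected value are assumptions layered on top of the test's plain cat:

  # Sketch: the cat matches the test; the grep and expected value are assumptions.
  minikube start -p force-systemd-flag-767669 --memory=2048 --force-systemd \
    --driver=kvm2 --container-runtime=crio
  minikube -p force-systemd-flag-767669 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
    | grep cgroup_manager    # expected (assumption): cgroup_manager = "systemd"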

                                                
                                    
TestForceSystemdEnv (79.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-722645 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-722645 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.181625154s)
helpers_test.go:175: Cleaning up "force-systemd-env-722645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-722645
--- PASS: TestForceSystemdEnv (79.98s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.12s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.12s)

                                                
                                    
TestErrorSpam/setup (44.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-241781 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-241781 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-241781 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-241781 --driver=kvm2  --container-runtime=crio: (44.540669655s)
--- PASS: TestErrorSpam/setup (44.54s)

                                                
                                    
TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
TestErrorSpam/stop (4.98s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 stop: (2.302601392s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 stop: (1.249898433s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-241781 --log_dir /tmp/nospam-241781 stop: (1.429238331s)
--- PASS: TestErrorSpam/stop (4.98s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18602-11466/.minikube/files/etc/test/nested/copy/18798/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-273789 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0505 21:11:51.947877   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:51.953523   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:51.963777   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:51.984054   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:52.024357   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:52.104694   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:52.265157   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:52.585730   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:53.226661   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:54.507182   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:11:57.068795   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:12:02.189216   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:12:12.429491   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:12:32.910053   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-273789 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.056203951s)
--- PASS: TestFunctional/serial/StartWithProxy (61.06s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (47.79s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-273789 --alsologtostderr -v=8
E0505 21:13:13.871649   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-273789 --alsologtostderr -v=8: (47.791092168s)
functional_test.go:659: soft start took 47.791743456s for "functional-273789" cluster.
--- PASS: TestFunctional/serial/SoftStart (47.79s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-273789 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 cache add registry.k8s.io/pause:3.3: (1.196403176s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 cache add registry.k8s.io/pause:latest: (1.050544558s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-273789 /tmp/TestFunctionalserialCacheCmdcacheadd_local3048913307/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cache add minikube-local-cache-test:functional-273789
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 cache add minikube-local-cache-test:functional-273789: (1.903718944s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cache delete minikube-local-cache-test:functional-273789
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-273789
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (229.038966ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
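cache_reload removes an image from the node's container storage and then restores it from minikube's on-host cache. A compact sketch of the same round trip, reusing the image and profile from this run:

  # Sketch of the cache round trip exercised above.
  minikube -p functional-273789 cache add registry.k8s.io/pause:latest
  minikube -p functional-273789 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-273789 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
  minikube -p functional-273789 cache reload
  minikube -p functional-273789 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again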

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 kubectl -- --context functional-273789 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-273789 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-273789 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-273789 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.144864551s)
functional_test.go:757: restart took 36.145004161s for "functional-273789" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.15s)
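--extra-config forwards per-component flags through minikube to the control plane; here it enables the NamespaceAutoProvision admission plugin on the apiserver. A minimal sketch for confirming the flag landed; the kube-apiserver-functional-273789 pod name is assumed to follow the usual kube-apiserver-<node> convention:

  # Sketch: pass a component flag through minikube, then check the apiserver's args.
  minikube start -p functional-273789 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  kubectl --context functional-273789 -n kube-system get pod kube-apiserver-functional-273789 \
    -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep admission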

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-273789 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.61s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 logs: (1.609542164s)
--- PASS: TestFunctional/serial/LogsCmd (1.61s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 logs --file /tmp/TestFunctionalserialLogsFileCmd3511200068/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 logs --file /tmp/TestFunctionalserialLogsFileCmd3511200068/001/logs.txt: (1.514137656s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-273789 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-273789
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-273789: exit status 115 (298.065928ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.183:30846 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-273789 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 config get cpus: exit status 14 (86.715349ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 config get cpus: exit status 14 (53.422927ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-273789 --alsologtostderr -v=1]
E0505 21:14:35.792836   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-273789 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27195: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.36s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-273789 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-273789 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (160.081379ms)

-- stdout --
	* [functional-273789] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0505 21:14:34.784885   27094 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:14:34.785057   27094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:14:34.785070   27094 out.go:304] Setting ErrFile to fd 2...
	I0505 21:14:34.785079   27094 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:14:34.785396   27094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:14:34.786091   27094 out.go:298] Setting JSON to false
	I0505 21:14:34.787124   27094 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3422,"bootTime":1714940253,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:14:34.787188   27094 start.go:139] virtualization: kvm guest
	I0505 21:14:34.789192   27094 out.go:177] * [functional-273789] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 21:14:34.791348   27094 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:14:34.792688   27094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:14:34.791398   27094 notify.go:220] Checking for updates...
	I0505 21:14:34.795166   27094 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:14:34.796724   27094 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:14:34.798359   27094 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:14:34.799652   27094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:14:34.801390   27094 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:14:34.802079   27094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:14:34.802143   27094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:14:34.818123   27094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34895
	I0505 21:14:34.818692   27094 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:14:34.819395   27094 main.go:141] libmachine: Using API Version  1
	I0505 21:14:34.819433   27094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:14:34.819775   27094 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:14:34.820004   27094 main.go:141] libmachine: (functional-273789) Calling .DriverName
	I0505 21:14:34.820329   27094 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:14:34.820664   27094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:14:34.820708   27094 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:14:34.840407   27094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0505 21:14:34.840966   27094 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:14:34.841435   27094 main.go:141] libmachine: Using API Version  1
	I0505 21:14:34.841461   27094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:14:34.841805   27094 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:14:34.841979   27094 main.go:141] libmachine: (functional-273789) Calling .DriverName
	I0505 21:14:34.875979   27094 out.go:177] * Using the kvm2 driver based on existing profile
	I0505 21:14:34.877059   27094 start.go:297] selected driver: kvm2
	I0505 21:14:34.877075   27094 start.go:901] validating driver "kvm2" against &{Name:functional-273789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-273789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:14:34.877175   27094 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:14:34.879249   27094 out.go:177] 
	W0505 21:14:34.880720   27094 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0505 21:14:34.882145   27094 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-273789 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
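The non-zero exit above is the behaviour under test: with --dry-run and --memory 250MB, minikube is expected to refuse the request (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch of reproducing the same check outside the test harness, assuming the CI workspace layout (built binary at out/minikube-linux-amd64, existing functional-273789 profile), might look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as functional_test.go:970; 250MB is deliberately below
		// minikube's usable minimum, so a non-zero exit is the expected outcome.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-273789",
			"--dry-run", "--memory", "250MB", "--alsologtostderr",
			"--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode()) // the report above shows 23
		}
	}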

TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-273789 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-273789 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.56705ms)
-- stdout --
	* [functional-273789] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0505 21:14:34.633536   27050 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:14:34.633716   27050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:14:34.633734   27050 out.go:304] Setting ErrFile to fd 2...
	I0505 21:14:34.633742   27050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:14:34.634085   27050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:14:34.634606   27050 out.go:298] Setting JSON to false
	I0505 21:14:34.635471   27050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3422,"bootTime":1714940253,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 21:14:34.635562   27050 start.go:139] virtualization: kvm guest
	I0505 21:14:34.637784   27050 out.go:177] * [functional-273789] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0505 21:14:34.639311   27050 notify.go:220] Checking for updates...
	I0505 21:14:34.639315   27050 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 21:14:34.640677   27050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 21:14:34.642451   27050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 21:14:34.643727   27050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 21:14:34.644947   27050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 21:14:34.646180   27050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 21:14:34.647778   27050 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:14:34.648217   27050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:14:34.648299   27050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:14:34.663780   27050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I0505 21:14:34.664240   27050 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:14:34.664983   27050 main.go:141] libmachine: Using API Version  1
	I0505 21:14:34.665008   27050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:14:34.665443   27050 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:14:34.665634   27050 main.go:141] libmachine: (functional-273789) Calling .DriverName
	I0505 21:14:34.665935   27050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 21:14:34.666353   27050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:14:34.666405   27050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:14:34.680723   27050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43403
	I0505 21:14:34.681137   27050 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:14:34.681609   27050 main.go:141] libmachine: Using API Version  1
	I0505 21:14:34.681636   27050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:14:34.682263   27050 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:14:34.682455   27050 main.go:141] libmachine: (functional-273789) Calling .DriverName
	I0505 21:14:34.716081   27050 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0505 21:14:34.717295   27050 start.go:297] selected driver: kvm2
	I0505 21:14:34.717313   27050 start.go:901] validating driver "kvm2" against &{Name:functional-273789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-273789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 21:14:34.717424   27050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 21:14:34.719390   27050 out.go:177] 
	W0505 21:14:34.720678   27050 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0505 21:14:34.721986   27050 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
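The French output above is what this test asserts. A sketch of triggering the same localized message by hand, under the assumption that minikube selects its message catalogue from the usual LC_ALL/LANG environment variables (the exact variable the test sets is not shown in this excerpt):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-273789",
			"--dry-run", "--memory", "250MB", "--alsologtostderr",
			"--driver=kvm2", "--container-runtime=crio")
		// Assumption: LC_ALL=fr is what switches the output to French, matching
		// the "Utilisation du pilote kvm2..." lines quoted above.
		cmd.Env = append(os.Environ(), "LC_ALL=fr")
		out, _ := cmd.CombinedOutput()
		fmt.Print(string(out))
	}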

TestFunctional/parallel/StatusCmd (1.28s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

TestFunctional/parallel/ServiceCmdConnect (8.53s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-273789 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-273789 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-m56sg" [4556b860-2300-48db-bc30-d8bcc392d5ba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-m56sg" [4556b860-2300-48db-bc30-d8bcc392d5ba] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004331163s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.183:31595
functional_test.go:1671: http://192.168.39.183:31595: success! body:

Hostname: hello-node-connect-57b4589c47-m56sg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.183:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.183:31595
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.53s)
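The endpoint check above can be repeated by hand: ask minikube for the NodePort URL of the service, then issue a plain HTTP GET. A small sketch mirroring what the test does once the deployment is healthy (service name and URL format taken from the log above):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask minikube for the service URL, then GET it and dump the echoserver body.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-273789",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.183:31595
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Print(string(body)) // Hostname, Request Information, ... as in the dump above
	}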

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (52.63s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [589c24c0-e9ae-42b6-938d-d9b8056b2587] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005798982s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-273789 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-273789 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-273789 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-273789 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-273789 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2bd3b385-ed11-4096-b3c3-97e72ce47622] Pending
helpers_test.go:344: "sp-pod" [2bd3b385-ed11-4096-b3c3-97e72ce47622] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2bd3b385-ed11-4096-b3c3-97e72ce47622] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.012637966s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-273789 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-273789 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-273789 delete -f testdata/storage-provisioner/pod.yaml: (1.803166608s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-273789 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ce8e7f22-88a2-4c38-888f-82c64add7afb] Pending
helpers_test.go:344: "sp-pod" [ce8e7f22-88a2-4c38-888f-82c64add7afb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ce8e7f22-88a2-4c38-888f-82c64add7afb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003760799s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-273789 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.63s)
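The sequence above is a standard persistence check: write a file through the PVC-backed mount, delete and recreate the pod, then confirm the file is still visible from the new pod. A condensed sketch of the same steps (pod name and manifest path taken from the log; the kubectl helper exists only in this sketch, and the wait for the recreated pod is elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubectl is a tiny helper for this sketch, not part of the test suite.
	func kubectl(args ...string) {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "functional-273789"}, args...)...).CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s(err: %v)\n", args, out, err)
	}

	func main() {
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// ...wait here for the new sp-pod to reach Running, as the test does...
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	}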

TestFunctional/parallel/SSHCmd (0.48s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (1.46s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh -n functional-273789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cp functional-273789:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd30261045/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh -n functional-273789 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh -n functional-273789 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)

TestFunctional/parallel/MySQL (37.5s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-273789 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-xttp9" [f08b39ee-7234-46fd-9d0e-c733ae3033e6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2024/05/05 21:14:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-64454c8b5c-xttp9" [f08b39ee-7234-46fd-9d0e-c733ae3033e6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.00817738s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-273789 exec mysql-64454c8b5c-xttp9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-273789 exec mysql-64454c8b5c-xttp9 -- mysql -ppassword -e "show databases;": exit status 1 (190.210908ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-273789 exec mysql-64454c8b5c-xttp9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-273789 exec mysql-64454c8b5c-xttp9 -- mysql -ppassword -e "show databases;": exit status 1 (168.634274ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-273789 exec mysql-64454c8b5c-xttp9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-273789 exec mysql-64454c8b5c-xttp9 -- mysql -ppassword -e "show databases;": exit status 1 (159.917831ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-273789 exec mysql-64454c8b5c-xttp9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.50s)
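The three non-zero exits above (ERROR 1045 twice, then ERROR 2002) only mean mysqld inside the pod was still initialising; the test simply retries the query until it succeeds. A sketch of the same retry loop (pod name from the log above):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			// Same query the test runs; transient "Access denied" / "Can't connect"
			// errors are expected while the container is still starting up.
			out, err := exec.Command("kubectl", "--context", "functional-273789",
				"exec", "mysql-64454c8b5c-xttp9", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			fmt.Printf("attempt %d failed: %v\n", i+1, err)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("mysql never became ready")
	}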

TestFunctional/parallel/FileSync (0.21s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18798/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo cat /etc/test/nested/copy/18798/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18798.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo cat /etc/ssl/certs/18798.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18798.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo cat /usr/share/ca-certificates/18798.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/187982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo cat /etc/ssl/certs/187982.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/187982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo cat /usr/share/ca-certificates/187982.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-273789 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh "sudo systemctl is-active docker": exit status 1 (232.045389ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh "sudo systemctl is-active containerd": exit status 1 (354.061876ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
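Here the non-zero exits are the expected result: on a node configured for crio, "sudo systemctl is-active docker" (and containerd) prints "inactive" and exits with status 3, which minikube ssh surfaces as its own exit status 1. A sketch of the same probe:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-273789",
				"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
			// A non-nil err here just means the unit is not active, which is what
			// this test wants to see when crio is the configured runtime.
			fmt.Printf("%s: %s(err: %v)\n", unit, out, err)
		}
	}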

TestFunctional/parallel/License (0.68s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-273789 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-273789 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-j5nql" [8cbbf76e-22ac-4371-b7d1-89a14ba85029] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-j5nql" [8cbbf76e-22ac-4371-b7d1-89a14ba85029] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.0092121s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "267.859161ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "63.97434ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "306.738416ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "57.888105ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (10.78s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdany-port2029527640/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714943673290378718" to /tmp/TestFunctionalparallelMountCmdany-port2029527640/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714943673290378718" to /tmp/TestFunctionalparallelMountCmdany-port2029527640/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714943673290378718" to /tmp/TestFunctionalparallelMountCmdany-port2029527640/001/test-1714943673290378718
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.099081ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  5 21:14 created-by-test
-rw-r--r-- 1 docker docker 24 May  5 21:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  5 21:14 test-1714943673290378718
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh cat /mount-9p/test-1714943673290378718
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-273789 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c4979502-533c-4ae0-8cdb-6df07e7f2e25] Pending
helpers_test.go:344: "busybox-mount" [c4979502-533c-4ae0-8cdb-6df07e7f2e25] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c4979502-533c-4ae0-8cdb-6df07e7f2e25] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c4979502-533c-4ae0-8cdb-6df07e7f2e25] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.006884289s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-273789 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdany-port2029527640/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.78s)
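The first findmnt probe failing and the second succeeding is normal: the 9p mount is served by a background minikube mount process and takes a moment to appear, so the test polls. A sketch of the same poll (mount point and profile taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			// Probe the guest for the 9p mount the same way the test does.
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-273789",
				"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("mount never appeared")
	}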

TestFunctional/parallel/ServiceCmd/List (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 service list -o json
functional_test.go:1490: Took "491.878821ms" to run "out/minikube-linux-amd64 -p functional-273789 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.183:31564
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.06s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdspecific-port2596803717/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.975613ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdspecific-port2596803717/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh "sudo umount -f /mount-9p": exit status 1 (307.856548ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-273789 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdspecific-port2596803717/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.183:31564
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.94s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1351913327/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1351913327/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1351913327/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T" /mount1: exit status 1 (309.606176ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-273789 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1351913327/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1351913327/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-273789 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1351913327/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-273789 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-273789
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-273789
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-273789 image ls --format short --alsologtostderr:
I0505 21:15:21.665939   29036 out.go:291] Setting OutFile to fd 1 ...
I0505 21:15:21.666206   29036 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:21.666219   29036 out.go:304] Setting ErrFile to fd 2...
I0505 21:15:21.666225   29036 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:21.666520   29036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
I0505 21:15:21.667307   29036 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:21.667451   29036 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:21.668049   29036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:21.668113   29036 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:21.683561   29036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40535
I0505 21:15:21.684308   29036 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:21.684825   29036 main.go:141] libmachine: Using API Version  1
I0505 21:15:21.684846   29036 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:21.685104   29036 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:21.685279   29036 main.go:141] libmachine: (functional-273789) Calling .GetState
I0505 21:15:21.686771   29036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:21.686806   29036 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:21.700307   29036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
I0505 21:15:21.700731   29036 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:21.701252   29036 main.go:141] libmachine: Using API Version  1
I0505 21:15:21.701268   29036 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:21.701600   29036 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:21.701810   29036 main.go:141] libmachine: (functional-273789) Calling .DriverName
I0505 21:15:21.702039   29036 ssh_runner.go:195] Run: systemctl --version
I0505 21:15:21.702066   29036 main.go:141] libmachine: (functional-273789) Calling .GetSSHHostname
I0505 21:15:21.705827   29036 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:21.705893   29036 main.go:141] libmachine: (functional-273789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:a9:34", ip: ""} in network mk-functional-273789: {Iface:virbr1 ExpiryTime:2024-05-05 22:12:07 +0000 UTC Type:0 Mac:52:54:00:88:a9:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-273789 Clientid:01:52:54:00:88:a9:34}
I0505 21:15:21.705952   29036 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined IP address 192.168.39.183 and MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:21.706110   29036 main.go:141] libmachine: (functional-273789) Calling .GetSSHPort
I0505 21:15:21.706296   29036 main.go:141] libmachine: (functional-273789) Calling .GetSSHKeyPath
I0505 21:15:21.706438   29036 main.go:141] libmachine: (functional-273789) Calling .GetSSHUsername
I0505 21:15:21.706592   29036 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/functional-273789/id_rsa Username:docker}
I0505 21:15:21.803326   29036 ssh_runner.go:195] Run: sudo crictl images --output json
I0505 21:15:21.868269   29036 main.go:141] libmachine: Making call to close driver server
I0505 21:15:21.868286   29036 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:21.868553   29036 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:21.868577   29036 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 21:15:21.868586   29036 main.go:141] libmachine: Making call to close driver server
I0505 21:15:21.868609   29036 main.go:141] libmachine: (functional-273789) DBG | Closing plugin on server side
I0505 21:15:21.868655   29036 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:21.868896   29036 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:21.868909   29036 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-273789 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-273789  | 20c8766d16324 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 7383c266ef252 | 192MB  |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-273789  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-273789 image ls --format table --alsologtostderr:
I0505 21:15:22.261176   29157 out.go:291] Setting OutFile to fd 1 ...
I0505 21:15:22.261424   29157 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:22.261435   29157 out.go:304] Setting ErrFile to fd 2...
I0505 21:15:22.261439   29157 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:22.261632   29157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
I0505 21:15:22.262192   29157 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:22.262283   29157 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:22.262623   29157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:22.262677   29157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:22.276847   29157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35219
I0505 21:15:22.277329   29157 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:22.277872   29157 main.go:141] libmachine: Using API Version  1
I0505 21:15:22.277896   29157 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:22.278217   29157 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:22.278414   29157 main.go:141] libmachine: (functional-273789) Calling .GetState
I0505 21:15:22.280444   29157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:22.280491   29157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:22.293999   29157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
I0505 21:15:22.294355   29157 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:22.294776   29157 main.go:141] libmachine: Using API Version  1
I0505 21:15:22.294797   29157 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:22.295141   29157 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:22.295362   29157 main.go:141] libmachine: (functional-273789) Calling .DriverName
I0505 21:15:22.295626   29157 ssh_runner.go:195] Run: systemctl --version
I0505 21:15:22.295655   29157 main.go:141] libmachine: (functional-273789) Calling .GetSSHHostname
I0505 21:15:22.298533   29157 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:22.298924   29157 main.go:141] libmachine: (functional-273789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:a9:34", ip: ""} in network mk-functional-273789: {Iface:virbr1 ExpiryTime:2024-05-05 22:12:07 +0000 UTC Type:0 Mac:52:54:00:88:a9:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-273789 Clientid:01:52:54:00:88:a9:34}
I0505 21:15:22.298967   29157 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined IP address 192.168.39.183 and MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:22.299247   29157 main.go:141] libmachine: (functional-273789) Calling .GetSSHPort
I0505 21:15:22.299438   29157 main.go:141] libmachine: (functional-273789) Calling .GetSSHKeyPath
I0505 21:15:22.299626   29157 main.go:141] libmachine: (functional-273789) Calling .GetSSHUsername
I0505 21:15:22.299799   29157 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/functional-273789/id_rsa Username:docker}
I0505 21:15:22.397805   29157 ssh_runner.go:195] Run: sudo crictl images --output json
I0505 21:15:22.472759   29157 main.go:141] libmachine: Making call to close driver server
I0505 21:15:22.472775   29157 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:22.473055   29157 main.go:141] libmachine: (functional-273789) DBG | Closing plugin on server side
I0505 21:15:22.473064   29157 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:22.473080   29157 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 21:15:22.473090   29157 main.go:141] libmachine: Making call to close driver server
I0505 21:15:22.473100   29157 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:22.473345   29157 main.go:141] libmachine: (functional-273789) DBG | Closing plugin on server side
I0505 21:15:22.473362   29157 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:22.473375   29157 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-273789 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"
repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:
61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
,"repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-273789"],"size":"34114467"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"e6f1816883972d4be47bd48879a089
19b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kub
e-controller-manager:v1.30.0"],"size":"112170310"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"191760844"},{"id":"20c8766d16324be9d3d379f7
d3810161c6d240933b2cd8c07378172cc9a253e6","repoDigests":["localhost/minikube-local-cache-test@sha256:2ba7e43c3314b6e7abcee2c9690f4a4a6a710b8f5595c096269aaf780ca35ed1"],"repoTags":["localhost/minikube-local-cache-test:functional-273789"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-273789 image ls --format json --alsologtostderr:
I0505 21:15:22.002331   29094 out.go:291] Setting OutFile to fd 1 ...
I0505 21:15:22.002456   29094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:22.002467   29094 out.go:304] Setting ErrFile to fd 2...
I0505 21:15:22.002474   29094 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:22.002790   29094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
I0505 21:15:22.003614   29094 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:22.003760   29094 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:22.004346   29094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:22.004412   29094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:22.019573   29094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
I0505 21:15:22.020117   29094 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:22.020694   29094 main.go:141] libmachine: Using API Version  1
I0505 21:15:22.020713   29094 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:22.021046   29094 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:22.021284   29094 main.go:141] libmachine: (functional-273789) Calling .GetState
I0505 21:15:22.023274   29094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:22.023316   29094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:22.042699   29094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
I0505 21:15:22.043069   29094 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:22.043583   29094 main.go:141] libmachine: Using API Version  1
I0505 21:15:22.043617   29094 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:22.043979   29094 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:22.044198   29094 main.go:141] libmachine: (functional-273789) Calling .DriverName
I0505 21:15:22.044415   29094 ssh_runner.go:195] Run: systemctl --version
I0505 21:15:22.044442   29094 main.go:141] libmachine: (functional-273789) Calling .GetSSHHostname
I0505 21:15:22.047164   29094 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:22.047649   29094 main.go:141] libmachine: (functional-273789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:a9:34", ip: ""} in network mk-functional-273789: {Iface:virbr1 ExpiryTime:2024-05-05 22:12:07 +0000 UTC Type:0 Mac:52:54:00:88:a9:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-273789 Clientid:01:52:54:00:88:a9:34}
I0505 21:15:22.047681   29094 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined IP address 192.168.39.183 and MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:22.047901   29094 main.go:141] libmachine: (functional-273789) Calling .GetSSHPort
I0505 21:15:22.048062   29094 main.go:141] libmachine: (functional-273789) Calling .GetSSHKeyPath
I0505 21:15:22.048230   29094 main.go:141] libmachine: (functional-273789) Calling .GetSSHUsername
I0505 21:15:22.048372   29094 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/functional-273789/id_rsa Username:docker}
I0505 21:15:22.131526   29094 ssh_runner.go:195] Run: sudo crictl images --output json
I0505 21:15:22.200143   29094 main.go:141] libmachine: Making call to close driver server
I0505 21:15:22.200160   29094 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:22.200401   29094 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:22.200418   29094 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 21:15:22.200438   29094 main.go:141] libmachine: Making call to close driver server
I0505 21:15:22.200451   29094 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:22.200660   29094 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:22.200673   29094 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
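Note: the JSON stdout above is what "minikube image ls --format json" relays from "sudo crictl images --output json" on the node (see the ssh_runner line in the stderr). A minimal Go sketch for decoding that output, with struct fields taken from the listing above; the type and variable names are illustrative only, not minikube's own, and the invocation assumes it is run from the workspace root so the out/ path resolves.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the JSON stdout above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, serialized as a string
}

func main() {
	// Same command the test runs against profile functional-273789 (assumed working directory).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-273789",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		// IDs in the listing are 64 hex characters; print the usual short form.
		fmt.Printf("%-60s %s %s bytes\n", tag, img.ID[:12], img.Size)
	}
}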

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-273789 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 20c8766d16324be9d3d379f7d3810161c6d240933b2cd8c07378172cc9a253e6
repoDigests:
- localhost/minikube-local-cache-test@sha256:2ba7e43c3314b6e7abcee2c9690f4a4a6a710b8f5595c096269aaf780ca35ed1
repoTags:
- localhost/minikube-local-cache-test:functional-273789
size: "3330"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "191760844"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-273789
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-273789 image ls --format yaml --alsologtostderr:
I0505 21:15:21.661652   29037 out.go:291] Setting OutFile to fd 1 ...
I0505 21:15:21.661821   29037 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:21.661834   29037 out.go:304] Setting ErrFile to fd 2...
I0505 21:15:21.661840   29037 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:21.662039   29037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
I0505 21:15:21.662563   29037 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:21.662661   29037 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:21.663040   29037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:21.663095   29037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:21.678104   29037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
I0505 21:15:21.678552   29037 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:21.679256   29037 main.go:141] libmachine: Using API Version  1
I0505 21:15:21.679288   29037 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:21.679687   29037 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:21.679913   29037 main.go:141] libmachine: (functional-273789) Calling .GetState
I0505 21:15:21.681981   29037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:21.682034   29037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:21.701788   29037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
I0505 21:15:21.702152   29037 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:21.702674   29037 main.go:141] libmachine: Using API Version  1
I0505 21:15:21.702701   29037 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:21.703049   29037 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:21.703363   29037 main.go:141] libmachine: (functional-273789) Calling .DriverName
I0505 21:15:21.703660   29037 ssh_runner.go:195] Run: systemctl --version
I0505 21:15:21.703690   29037 main.go:141] libmachine: (functional-273789) Calling .GetSSHHostname
I0505 21:15:21.706888   29037 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:21.707182   29037 main.go:141] libmachine: (functional-273789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:a9:34", ip: ""} in network mk-functional-273789: {Iface:virbr1 ExpiryTime:2024-05-05 22:12:07 +0000 UTC Type:0 Mac:52:54:00:88:a9:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-273789 Clientid:01:52:54:00:88:a9:34}
I0505 21:15:21.707229   29037 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined IP address 192.168.39.183 and MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:21.707447   29037 main.go:141] libmachine: (functional-273789) Calling .GetSSHPort
I0505 21:15:21.711613   29037 main.go:141] libmachine: (functional-273789) Calling .GetSSHKeyPath
I0505 21:15:21.711757   29037 main.go:141] libmachine: (functional-273789) Calling .GetSSHUsername
I0505 21:15:21.711888   29037 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/functional-273789/id_rsa Username:docker}
I0505 21:15:21.817719   29037 ssh_runner.go:195] Run: sudo crictl images --output json
I0505 21:15:21.925731   29037 main.go:141] libmachine: Making call to close driver server
I0505 21:15:21.925747   29037 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:21.926031   29037 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:21.926063   29037 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 21:15:21.926081   29037 main.go:141] libmachine: Making call to close driver server
I0505 21:15:21.926095   29037 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:21.926037   29037 main.go:141] libmachine: (functional-273789) DBG | Closing plugin on server side
I0505 21:15:21.926346   29037 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:21.926362   29037 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 21:15:21.926376   29037 main.go:141] libmachine: (functional-273789) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-273789 ssh pgrep buildkitd: exit status 1 (213.710496ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image build -t localhost/my-image:functional-273789 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 image build -t localhost/my-image:functional-273789 testdata/build --alsologtostderr: (3.371859244s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-273789 image build -t localhost/my-image:functional-273789 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0586991035f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-273789
--> 7abd5716457
Successfully tagged localhost/my-image:functional-273789
7abd571645746f68d9f97fc8440914426b99026fa78fa5fe26fb453845f2966d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-273789 image build -t localhost/my-image:functional-273789 testdata/build --alsologtostderr:
I0505 21:15:22.151671   29134 out.go:291] Setting OutFile to fd 1 ...
I0505 21:15:22.152001   29134 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:22.152015   29134 out.go:304] Setting ErrFile to fd 2...
I0505 21:15:22.152023   29134 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 21:15:22.152404   29134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
I0505 21:15:22.153281   29134 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:22.154452   29134 config.go:182] Loaded profile config "functional-273789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0505 21:15:22.155576   29134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:22.155625   29134 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:22.170607   29134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
I0505 21:15:22.171065   29134 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:22.171725   29134 main.go:141] libmachine: Using API Version  1
I0505 21:15:22.171746   29134 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:22.172133   29134 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:22.172360   29134 main.go:141] libmachine: (functional-273789) Calling .GetState
I0505 21:15:22.174687   29134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0505 21:15:22.174740   29134 main.go:141] libmachine: Launching plugin server for driver kvm2
I0505 21:15:22.190650   29134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
I0505 21:15:22.191247   29134 main.go:141] libmachine: () Calling .GetVersion
I0505 21:15:22.191815   29134 main.go:141] libmachine: Using API Version  1
I0505 21:15:22.191842   29134 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 21:15:22.192160   29134 main.go:141] libmachine: () Calling .GetMachineName
I0505 21:15:22.192391   29134 main.go:141] libmachine: (functional-273789) Calling .DriverName
I0505 21:15:22.192654   29134 ssh_runner.go:195] Run: systemctl --version
I0505 21:15:22.192687   29134 main.go:141] libmachine: (functional-273789) Calling .GetSSHHostname
I0505 21:15:22.195904   29134 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:22.196312   29134 main.go:141] libmachine: (functional-273789) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:a9:34", ip: ""} in network mk-functional-273789: {Iface:virbr1 ExpiryTime:2024-05-05 22:12:07 +0000 UTC Type:0 Mac:52:54:00:88:a9:34 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:functional-273789 Clientid:01:52:54:00:88:a9:34}
I0505 21:15:22.196344   29134 main.go:141] libmachine: (functional-273789) DBG | domain functional-273789 has defined IP address 192.168.39.183 and MAC address 52:54:00:88:a9:34 in network mk-functional-273789
I0505 21:15:22.196529   29134 main.go:141] libmachine: (functional-273789) Calling .GetSSHPort
I0505 21:15:22.196698   29134 main.go:141] libmachine: (functional-273789) Calling .GetSSHKeyPath
I0505 21:15:22.196856   29134 main.go:141] libmachine: (functional-273789) Calling .GetSSHUsername
I0505 21:15:22.197017   29134 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/functional-273789/id_rsa Username:docker}
I0505 21:15:22.298033   29134 build_images.go:161] Building image from path: /tmp/build.2632747402.tar
I0505 21:15:22.298103   29134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0505 21:15:22.313418   29134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2632747402.tar
I0505 21:15:22.327928   29134 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2632747402.tar: stat -c "%s %y" /var/lib/minikube/build/build.2632747402.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2632747402.tar': No such file or directory
I0505 21:15:22.327966   29134 ssh_runner.go:362] scp /tmp/build.2632747402.tar --> /var/lib/minikube/build/build.2632747402.tar (3072 bytes)
I0505 21:15:22.370311   29134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2632747402
I0505 21:15:22.390054   29134 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2632747402 -xf /var/lib/minikube/build/build.2632747402.tar
I0505 21:15:22.410179   29134 crio.go:315] Building image: /var/lib/minikube/build/build.2632747402
I0505 21:15:22.410246   29134 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-273789 /var/lib/minikube/build/build.2632747402 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0505 21:15:25.425632   29134 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-273789 /var/lib/minikube/build/build.2632747402 --cgroup-manager=cgroupfs: (3.015355518s)
I0505 21:15:25.425708   29134 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2632747402
I0505 21:15:25.437850   29134 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2632747402.tar
I0505 21:15:25.453521   29134 build_images.go:217] Built localhost/my-image:functional-273789 from /tmp/build.2632747402.tar
I0505 21:15:25.453556   29134 build_images.go:133] succeeded building to: functional-273789
I0505 21:15:25.453562   29134 build_images.go:134] failed building to: 
I0505 21:15:25.453587   29134 main.go:141] libmachine: Making call to close driver server
I0505 21:15:25.453598   29134 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:25.453883   29134 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:25.453909   29134 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 21:15:25.453920   29134 main.go:141] libmachine: Making call to close driver server
I0505 21:15:25.453924   29134 main.go:141] libmachine: (functional-273789) DBG | Closing plugin on server side
I0505 21:15:25.453930   29134 main.go:141] libmachine: (functional-273789) Calling .Close
I0505 21:15:25.454185   29134 main.go:141] libmachine: Successfully made call to close driver server
I0505 21:15:25.454203   29134 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
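The (dbg) Run lines above drive the minikube binary directly and then inspect its output. A hedged Go sketch of issuing the same build outside the test framework; the binary path, profile name, tag, and expected "Successfully tagged" line are taken from the log, and this is not the functional_test.go helper itself.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the test issues for profile functional-273789
	// (assumed to be run from the workspace root so out/ and testdata/build resolve).
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-273789",
		"image", "build", "-t", "localhost/my-image:functional-273789",
		"testdata/build", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println(string(out))
		panic(err)
	}
	// The build is good when podman reports the final tag, as in the stdout above.
	if !strings.Contains(string(out), "Successfully tagged localhost/my-image:functional-273789") {
		panic("image build did not report the expected tag")
	}
	fmt.Println("build ok")
}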

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.17139926s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-273789
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image load --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 image load --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr: (4.834065888s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image load --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 image load --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr: (5.533018628s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.076658834s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-273789
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image load --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 image load --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr: (13.834762279s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image save gcr.io/google-containers/addon-resizer:functional-273789 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 image save gcr.io/google-containers/addon-resizer:functional-273789 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.952828171s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image rm gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.754827476s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-273789
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-273789 image save --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-273789 image save --daemon gcr.io/google-containers/addon-resizer:functional-273789 --alsologtostderr: (1.31744132s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-273789
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.35s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-273789
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-273789
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-273789
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (259.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-322980 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0505 21:16:51.947293   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:17:19.633593   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 21:19:31.829542   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:31.834837   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:31.845171   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:31.865499   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:31.905885   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:31.986772   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:32.147280   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:32.467611   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:33.107849   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:34.388169   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:36.949264   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:19:42.070094   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-322980 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m18.577326723s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (259.26s)
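
Note: the start invocation above is the whole HA setup; the rest of the block is verification. A minimal sketch of reproducing it outside the test harness, assuming a minikube binary on PATH and the placeholder profile name ha-demo (memory, driver and runtime flags mirror the log):

    # bring up a multi-control-plane (HA) cluster on the KVM driver with crio
    minikube start -p ha-demo --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    # confirm every node reports Running/Configured
    minikube -p ha-demo status -v=7 --alsologtostderr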

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- rollout status deployment/busybox
E0505 21:19:52.311220   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-322980 -- rollout status deployment/busybox: (5.902930338s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-tbmdd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xt9l5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xz268 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-tbmdd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xt9l5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xz268 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-tbmdd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xt9l5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xz268 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.35s)
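
Note: this step only exercises kubectl through the minikube wrapper plus in-cluster DNS. A rough equivalent by hand, assuming the ha-demo placeholder profile and a local copy of the manifest used above:

    # deploy the busybox test workload and wait for the rollout
    minikube kubectl -p ha-demo -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube kubectl -p ha-demo -- rollout status deployment/busybox
    # from each replica, check that cluster DNS answers for internal and external names
    for pod in $(minikube kubectl -p ha-demo -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      minikube kubectl -p ha-demo -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
      minikube kubectl -p ha-demo -- exec "$pod" -- nslookup kubernetes.io
    done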

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-tbmdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-tbmdd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xt9l5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xt9l5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xz268 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-322980 -- exec busybox-fc5497c4f-xz268 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)
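
Note: the awk/cut pipeline above just pulls the host.minikube.internal address out of BusyBox's nslookup output before pinging it. A condensed sketch, with busybox-pod standing in for one of the pod names listed earlier:

    # resolve the host entry minikube injects into the cluster, then ping it from the pod
    HOST_IP=$(minikube kubectl -p ha-demo -- exec busybox-pod -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p ha-demo -- exec busybox-pod -- sh -c "ping -c 1 $HOST_IP"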

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (76.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-322980 -v=7 --alsologtostderr
E0505 21:20:12.791700   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:20:53.752267   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-322980 -v=7 --alsologtostderr: (1m15.712802574s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (76.60s)
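
Note: adding a worker to an existing HA profile is a single command; the follow-up status call is what the test asserts on. Sketch with the same placeholder profile:

    minikube node add -p ha-demo -v=7 --alsologtostderr
    minikube -p ha-demo status -v=7 --alsologtostderr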

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-322980 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp testdata/cp-test.txt ha-322980:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980:/home/docker/cp-test.txt ha-322980-m02:/home/docker/cp-test_ha-322980_ha-322980-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test_ha-322980_ha-322980-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980:/home/docker/cp-test.txt ha-322980-m03:/home/docker/cp-test_ha-322980_ha-322980-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test_ha-322980_ha-322980-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980:/home/docker/cp-test.txt ha-322980-m04:/home/docker/cp-test_ha-322980_ha-322980-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test_ha-322980_ha-322980-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp testdata/cp-test.txt ha-322980-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m02:/home/docker/cp-test.txt ha-322980:/home/docker/cp-test_ha-322980-m02_ha-322980.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test_ha-322980-m02_ha-322980.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m02:/home/docker/cp-test.txt ha-322980-m03:/home/docker/cp-test_ha-322980-m02_ha-322980-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test_ha-322980-m02_ha-322980-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m02:/home/docker/cp-test.txt ha-322980-m04:/home/docker/cp-test_ha-322980-m02_ha-322980-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test_ha-322980-m02_ha-322980-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp testdata/cp-test.txt ha-322980-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt ha-322980:/home/docker/cp-test_ha-322980-m03_ha-322980.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test_ha-322980-m03_ha-322980.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt ha-322980-m02:/home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test_ha-322980-m03_ha-322980-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m03:/home/docker/cp-test.txt ha-322980-m04:/home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test_ha-322980-m03_ha-322980-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp testdata/cp-test.txt ha-322980-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3379972730/001/cp-test_ha-322980-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt ha-322980:/home/docker/cp-test_ha-322980-m04_ha-322980.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980 "sudo cat /home/docker/cp-test_ha-322980-m04_ha-322980.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt ha-322980-m02:/home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m02 "sudo cat /home/docker/cp-test_ha-322980-m04_ha-322980-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 cp ha-322980-m04:/home/docker/cp-test.txt ha-322980-m03:/home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 ssh -n ha-322980-m03 "sudo cat /home/docker/cp-test_ha-322980-m04_ha-322980-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.89s)
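
Note: the matrix above is every (source node, destination node) pairing of minikube cp followed by an ssh cat to prove the file landed. One representative leg, assuming the ha-demo placeholder profile and its -m02 node:

    # copy a local file into the primary node, fan it out to a secondary node, then verify
    minikube -p ha-demo cp testdata/cp-test.txt ha-demo:/home/docker/cp-test.txt
    minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt ha-demo-m02:/home/docker/cp-test_copy.txt
    minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test_copy.txt"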

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.509089837s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (348.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-322980 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0505 21:34:31.830124   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:35:54.873723   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:36:51.947600   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-322980 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m48.119545017s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-322980 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (348.90s)
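
Note: the restart is the same start command run against an already-created profile, and the final go-template query is a Ready-condition check over every node. Sketch, again under the ha-demo placeholder:

    # restart all nodes of the existing profile and wait for them to settle
    minikube start -p ha-demo --wait=true -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    # every line of output should read "True" once the nodes are Ready
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'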

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (101.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-201019 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0505 21:44:31.830279   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 21:44:54.994391   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-201019 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.095424933s)
--- PASS: TestJSONOutput/start/Command (101.10s)
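
Note: with --output=json every start step is emitted as a CloudEvents-style JSON line on stdout, which is what the DistinctCurrentSteps/IncreasingCurrentSteps subtests below check. A quick way to eyeball the step stream yourself (profile name json-demo is a placeholder, and jq is an assumption on the host, not something the test uses):

    minikube start -p json-demo --output=json --user=testUser --memory=2200 --wait=true \
      --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.step") | .data.currentstep + " " + .data.message'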

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-201019 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-201019 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.41s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-201019 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-201019 --output=json --user=testUser: (7.409375056s)
--- PASS: TestJSONOutput/stop/Command (7.41s)
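
Note: pause, unpause and stop accept the same --output=json --user flags as start, so the whole lifecycle of the profile can be driven with machine-readable output. Sketch against the json-demo placeholder profile from the start step:

    minikube pause   -p json-demo --output=json --user=testUser
    minikube unpause -p json-demo --output=json --user=testUser
    minikube stop    -p json-demo --output=json --user=testUser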

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-993253 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-993253 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.040568ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"76e553f9-8d27-4110-b951-32fe5250711a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-993253] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b02fe4db-bff2-4ece-a288-40e7152c4972","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18602"}}
	{"specversion":"1.0","id":"d29881ef-ee2f-46ad-8675-be3bbded730d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5e7dc531-c0fa-4445-bd81-8c16c644a713","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig"}}
	{"specversion":"1.0","id":"c6e0cfec-d131-40d6-80a6-38dc0fe06cd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube"}}
	{"specversion":"1.0","id":"b716254e-f701-4caf-9663-6177d8bf0751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75e452d6-a89e-4fed-bc2a-b656b8d6cf1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0355d36d-99c8-4bdb-be9a-7a3ab0af3eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-993253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-993253
--- PASS: TestErrorJSONOutput (0.23s)
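
Note: the interesting part of this failure-path test is the final JSON event, an io.k8s.sigs.minikube.error record carrying exitcode 56 and the DRV_UNSUPPORTED_OS name for the bogus driver. A sketch of checking that behaviour directly (json-err-demo is a placeholder profile name):

    # expected to fail: "fail" is not a real driver
    minikube start -p json-err-demo --memory=2200 --output=json --wait=true --driver=fail
    echo "exit code: $?"    # the test asserts this is 56
    minikube delete -p json-err-demo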

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.8s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-597170 --driver=kvm2  --container-runtime=crio
E0505 21:46:51.947545   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-597170 --driver=kvm2  --container-runtime=crio: (50.151812939s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-600289 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-600289 --driver=kvm2  --container-runtime=crio: (44.951554473s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-597170
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-600289
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-600289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-600289
helpers_test.go:175: Cleaning up "first-597170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-597170
--- PASS: TestMinikubeProfile (97.80s)
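
Note: this test proves that two profiles can coexist and that `minikube profile <name>` switches the active one, which the subsequent `profile list -ojson` call is then inspected for. Sketch with placeholder profile names:

    minikube start -p first-demo --driver=kvm2 --container-runtime=crio
    minikube start -p second-demo --driver=kvm2 --container-runtime=crio
    minikube profile first-demo       # make first-demo the active profile
    minikube profile list -ojson      # the test inspects this JSON for the active profile
    minikube delete -p second-demo && minikube delete -p first-demo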

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.81s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-143214 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-143214 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.805802448s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.81s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-143214 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-143214 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
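
Note: the two MountStart blocks above boil down to starting a Kubernetes-less VM with a 9p host mount and then proving the mount is visible from inside it. Sketch with a placeholder profile name:

    # start a VM only (no Kubernetes) with the host directory mounted over 9p
    minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
      --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # the default mount target should be listable and show up as a 9p filesystem
    minikube -p mount-demo ssh -- ls /minikube-host
    minikube -p mount-demo ssh -- mount | grep 9p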

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-168338 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-168338 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.735972957s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.74s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-143214 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-168338
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-168338: (1.353278101s)
--- PASS: TestMountStart/serial/Stop (1.35s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.82s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-168338
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-168338: (23.815858495s)
--- PASS: TestMountStart/serial/RestartStopped (24.82s)
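
Note: RestartStopped just shows that a stopped mount-only profile comes back with a plain `minikube start` and (per the next block) keeps its 9p mount. Sketch against the mount-demo placeholder:

    minikube stop -p mount-demo
    minikube start -p mount-demo      # restart the previously stopped profile
    minikube -p mount-demo ssh -- mount | grep 9p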

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-168338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019621 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0505 21:49:31.828735   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019621 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m45.822701867s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.26s)
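
Note: the multi-node variant differs from the HA run mainly in using --nodes=2 instead of --ha, giving one control plane plus one worker. Sketch with a placeholder profile name:

    minikube start -p mn-demo --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr \
      --driver=kvm2 --container-runtime=crio
    minikube -p mn-demo status --alsologtostderr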

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-019621 -- rollout status deployment/busybox: (4.130844832s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-cl7hp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-vwqqq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-cl7hp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-vwqqq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-cl7hp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-vwqqq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.87s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-cl7hp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-cl7hp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-vwqqq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-019621 -- exec busybox-fc5497c4f-vwqqq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (42.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-019621 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-019621 -v 3 --alsologtostderr: (42.15393473s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.75s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-019621 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp testdata/cp-test.txt multinode-019621:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621 "sudo cat /home/docker/cp-test.txt"
E0505 21:51:51.947219   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile971504099/001/cp-test_multinode-019621.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621:/home/docker/cp-test.txt multinode-019621-m02:/home/docker/cp-test_multinode-019621_multinode-019621-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m02 "sudo cat /home/docker/cp-test_multinode-019621_multinode-019621-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621:/home/docker/cp-test.txt multinode-019621-m03:/home/docker/cp-test_multinode-019621_multinode-019621-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m03 "sudo cat /home/docker/cp-test_multinode-019621_multinode-019621-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp testdata/cp-test.txt multinode-019621-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile971504099/001/cp-test_multinode-019621-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt multinode-019621:/home/docker/cp-test_multinode-019621-m02_multinode-019621.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621 "sudo cat /home/docker/cp-test_multinode-019621-m02_multinode-019621.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621-m02:/home/docker/cp-test.txt multinode-019621-m03:/home/docker/cp-test_multinode-019621-m02_multinode-019621-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m03 "sudo cat /home/docker/cp-test_multinode-019621-m02_multinode-019621-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp testdata/cp-test.txt multinode-019621-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile971504099/001/cp-test_multinode-019621-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt multinode-019621:/home/docker/cp-test_multinode-019621-m03_multinode-019621.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621 "sudo cat /home/docker/cp-test_multinode-019621-m03_multinode-019621.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 cp multinode-019621-m03:/home/docker/cp-test.txt multinode-019621-m02:/home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 ssh -n multinode-019621-m02 "sudo cat /home/docker/cp-test_multinode-019621-m03_multinode-019621-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.55s)

                                                
                                    
TestMultiNode/serial/StopNode (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-019621 node stop m03: (2.30750504s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019621 status: exit status 7 (432.666283ms)

                                                
                                                
-- stdout --
	multinode-019621
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-019621-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-019621-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-019621 status --alsologtostderr: exit status 7 (434.740568ms)

                                                
                                                
-- stdout --
	multinode-019621
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-019621-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-019621-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 21:52:01.421699   47884 out.go:291] Setting OutFile to fd 1 ...
	I0505 21:52:01.421841   47884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:52:01.421853   47884 out.go:304] Setting ErrFile to fd 2...
	I0505 21:52:01.421860   47884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 21:52:01.422173   47884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 21:52:01.422399   47884 out.go:298] Setting JSON to false
	I0505 21:52:01.422437   47884 mustload.go:65] Loading cluster: multinode-019621
	I0505 21:52:01.422554   47884 notify.go:220] Checking for updates...
	I0505 21:52:01.422936   47884 config.go:182] Loaded profile config "multinode-019621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 21:52:01.422959   47884 status.go:255] checking status of multinode-019621 ...
	I0505 21:52:01.423557   47884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:52:01.423622   47884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:52:01.438595   47884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I0505 21:52:01.439025   47884 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:52:01.439588   47884 main.go:141] libmachine: Using API Version  1
	I0505 21:52:01.439615   47884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:52:01.440007   47884 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:52:01.440238   47884 main.go:141] libmachine: (multinode-019621) Calling .GetState
	I0505 21:52:01.441812   47884 status.go:330] multinode-019621 host status = "Running" (err=<nil>)
	I0505 21:52:01.441828   47884 host.go:66] Checking if "multinode-019621" exists ...
	I0505 21:52:01.442102   47884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:52:01.442146   47884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:52:01.457694   47884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42279
	I0505 21:52:01.458063   47884 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:52:01.458482   47884 main.go:141] libmachine: Using API Version  1
	I0505 21:52:01.458512   47884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:52:01.458774   47884 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:52:01.458939   47884 main.go:141] libmachine: (multinode-019621) Calling .GetIP
	I0505 21:52:01.461184   47884 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:52:01.461617   47884 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:52:01.461653   47884 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:52:01.461758   47884 host.go:66] Checking if "multinode-019621" exists ...
	I0505 21:52:01.462068   47884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:52:01.462105   47884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:52:01.475929   47884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39689
	I0505 21:52:01.476272   47884 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:52:01.476582   47884 main.go:141] libmachine: Using API Version  1
	I0505 21:52:01.476597   47884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:52:01.476913   47884 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:52:01.477068   47884 main.go:141] libmachine: (multinode-019621) Calling .DriverName
	I0505 21:52:01.477222   47884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:52:01.477238   47884 main.go:141] libmachine: (multinode-019621) Calling .GetSSHHostname
	I0505 21:52:01.479970   47884 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:52:01.480349   47884 main.go:141] libmachine: (multinode-019621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:32:e4", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:49:31 +0000 UTC Type:0 Mac:52:54:00:ac:32:e4 Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-019621 Clientid:01:52:54:00:ac:32:e4}
	I0505 21:52:01.480395   47884 main.go:141] libmachine: (multinode-019621) DBG | domain multinode-019621 has defined IP address 192.168.39.30 and MAC address 52:54:00:ac:32:e4 in network mk-multinode-019621
	I0505 21:52:01.480465   47884 main.go:141] libmachine: (multinode-019621) Calling .GetSSHPort
	I0505 21:52:01.480611   47884 main.go:141] libmachine: (multinode-019621) Calling .GetSSHKeyPath
	I0505 21:52:01.480779   47884 main.go:141] libmachine: (multinode-019621) Calling .GetSSHUsername
	I0505 21:52:01.480912   47884 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621/id_rsa Username:docker}
	I0505 21:52:01.563518   47884 ssh_runner.go:195] Run: systemctl --version
	I0505 21:52:01.570290   47884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:52:01.585670   47884 kubeconfig.go:125] found "multinode-019621" server: "https://192.168.39.30:8443"
	I0505 21:52:01.585717   47884 api_server.go:166] Checking apiserver status ...
	I0505 21:52:01.585759   47884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 21:52:01.601270   47884 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1129/cgroup
	W0505 21:52:01.611416   47884 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1129/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 21:52:01.611502   47884 ssh_runner.go:195] Run: ls
	I0505 21:52:01.618685   47884 api_server.go:253] Checking apiserver healthz at https://192.168.39.30:8443/healthz ...
	I0505 21:52:01.622657   47884 api_server.go:279] https://192.168.39.30:8443/healthz returned 200:
	ok
	I0505 21:52:01.622683   47884 status.go:422] multinode-019621 apiserver status = Running (err=<nil>)
	I0505 21:52:01.622693   47884 status.go:257] multinode-019621 status: &{Name:multinode-019621 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:52:01.622708   47884 status.go:255] checking status of multinode-019621-m02 ...
	I0505 21:52:01.623075   47884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:52:01.623100   47884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:52:01.638526   47884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0505 21:52:01.638941   47884 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:52:01.639361   47884 main.go:141] libmachine: Using API Version  1
	I0505 21:52:01.639381   47884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:52:01.639679   47884 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:52:01.639858   47884 main.go:141] libmachine: (multinode-019621-m02) Calling .GetState
	I0505 21:52:01.641544   47884 status.go:330] multinode-019621-m02 host status = "Running" (err=<nil>)
	I0505 21:52:01.641559   47884 host.go:66] Checking if "multinode-019621-m02" exists ...
	I0505 21:52:01.641904   47884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:52:01.641941   47884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:52:01.656422   47884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0505 21:52:01.656828   47884 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:52:01.657288   47884 main.go:141] libmachine: Using API Version  1
	I0505 21:52:01.657307   47884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:52:01.657597   47884 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:52:01.657767   47884 main.go:141] libmachine: (multinode-019621-m02) Calling .GetIP
	I0505 21:52:01.660171   47884 main.go:141] libmachine: (multinode-019621-m02) DBG | domain multinode-019621-m02 has defined MAC address 52:54:00:ff:f3:fa in network mk-multinode-019621
	I0505 21:52:01.660549   47884 main.go:141] libmachine: (multinode-019621-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f3:fa", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:50:35 +0000 UTC Type:0 Mac:52:54:00:ff:f3:fa Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:multinode-019621-m02 Clientid:01:52:54:00:ff:f3:fa}
	I0505 21:52:01.660577   47884 main.go:141] libmachine: (multinode-019621-m02) DBG | domain multinode-019621-m02 has defined IP address 192.168.39.242 and MAC address 52:54:00:ff:f3:fa in network mk-multinode-019621
	I0505 21:52:01.660699   47884 host.go:66] Checking if "multinode-019621-m02" exists ...
	I0505 21:52:01.660982   47884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:52:01.661031   47884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:52:01.675032   47884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0505 21:52:01.675429   47884 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:52:01.675869   47884 main.go:141] libmachine: Using API Version  1
	I0505 21:52:01.675889   47884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:52:01.676189   47884 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:52:01.676469   47884 main.go:141] libmachine: (multinode-019621-m02) Calling .DriverName
	I0505 21:52:01.676664   47884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 21:52:01.676690   47884 main.go:141] libmachine: (multinode-019621-m02) Calling .GetSSHHostname
	I0505 21:52:01.679387   47884 main.go:141] libmachine: (multinode-019621-m02) DBG | domain multinode-019621-m02 has defined MAC address 52:54:00:ff:f3:fa in network mk-multinode-019621
	I0505 21:52:01.679822   47884 main.go:141] libmachine: (multinode-019621-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:f3:fa", ip: ""} in network mk-multinode-019621: {Iface:virbr1 ExpiryTime:2024-05-05 22:50:35 +0000 UTC Type:0 Mac:52:54:00:ff:f3:fa Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:multinode-019621-m02 Clientid:01:52:54:00:ff:f3:fa}
	I0505 21:52:01.679849   47884 main.go:141] libmachine: (multinode-019621-m02) DBG | domain multinode-019621-m02 has defined IP address 192.168.39.242 and MAC address 52:54:00:ff:f3:fa in network mk-multinode-019621
	I0505 21:52:01.679982   47884 main.go:141] libmachine: (multinode-019621-m02) Calling .GetSSHPort
	I0505 21:52:01.680134   47884 main.go:141] libmachine: (multinode-019621-m02) Calling .GetSSHKeyPath
	I0505 21:52:01.680362   47884 main.go:141] libmachine: (multinode-019621-m02) Calling .GetSSHUsername
	I0505 21:52:01.680528   47884 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18602-11466/.minikube/machines/multinode-019621-m02/id_rsa Username:docker}
	I0505 21:52:01.764773   47884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 21:52:01.780825   47884 status.go:257] multinode-019621-m02 status: &{Name:multinode-019621-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0505 21:52:01.780865   47884 status.go:255] checking status of multinode-019621-m03 ...
	I0505 21:52:01.781283   47884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0505 21:52:01.781320   47884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0505 21:52:01.796577   47884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37939
	I0505 21:52:01.796925   47884 main.go:141] libmachine: () Calling .GetVersion
	I0505 21:52:01.797460   47884 main.go:141] libmachine: Using API Version  1
	I0505 21:52:01.797505   47884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 21:52:01.797807   47884 main.go:141] libmachine: () Calling .GetMachineName
	I0505 21:52:01.797970   47884 main.go:141] libmachine: (multinode-019621-m03) Calling .GetState
	I0505 21:52:01.799379   47884 status.go:330] multinode-019621-m03 host status = "Stopped" (err=<nil>)
	I0505 21:52:01.799392   47884 status.go:343] host is not running, skipping remaining checks
	I0505 21:52:01.799400   47884 status.go:257] multinode-019621-m03 status: &{Name:multinode-019621-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)
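
The status log above probes each node in turn: libmachine GetState for the VM, "df -h /var" for disk usage, "systemctl is-active" for the kubelet, and, on the control plane, an apiserver check that ends with a GET against https://192.168.39.30:8443/healthz expecting HTTP 200 and a body of "ok". Below is a minimal Go sketch of that last probe only, under the assumption that the apiserver certificate is not trusted by the caller (so TLS verification is skipped); it is an illustration, not minikube's own status code path.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// checkHealthz mirrors the probe in the log: a GET that is considered
	// healthy only when it returns HTTP 200 with a body of "ok".
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver certificate is self-signed from the caller's
			// point of view, so verification is skipped in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
		}
		return nil
	}
	
	func main() {
		fmt.Println(checkHealthz("https://192.168.39.30:8443/healthz"))
	}

A nil result corresponds to the "returned 200: ok" line in the log; anything else maps to the degraded states reported by status.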

                                                
                                    
TestMultiNode/serial/StartAfterStop (30.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-019621 node start m03 -v=7 --alsologtostderr: (30.065349647s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.73s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-019621 node delete m03: (1.891906207s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.43s)
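
The DeleteNode check verifies node readiness with the kubectl go-template shown above, which prints one Ready-condition status per node. A minimal sketch of the same verification driven from Go via os/exec, assuming kubectl is on PATH and the current context points at the remaining multinode-019621 nodes (the helper is illustrative, not the test harness):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// allNodesReady runs the same go-template the test uses and checks that
	// every printed Ready condition is "True".
	func allNodesReady() (bool, error) {
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			return false, err
		}
		for _, line := range strings.Fields(string(out)) {
			if line != "True" {
				return false, nil
			}
		}
		return true, nil
	}
	
	func main() {
		ok, err := allNodesReady()
		fmt.Println(ok, err)
	}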

                                                
                                    
TestMultiNode/serial/RestartMultiNode (201.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019621 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0505 22:01:34.994843   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
E0505 22:01:51.947621   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019621 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.531922949s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-019621 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (201.08s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-019621
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019621-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-019621-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.944154ms)

                                                
                                                
-- stdout --
	* [multinode-019621-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-019621-m02' is duplicated with machine name 'multinode-019621-m02' in profile 'multinode-019621'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-019621-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-019621-m03 --driver=kvm2  --container-runtime=crio: (45.207838491s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-019621
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-019621: exit status 80 (227.073982ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-019621 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-019621-m03 already exists in multinode-019621-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-019621-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.35s)
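
The name-conflict test leans on specific exit codes: 14 (MK_USAGE) when the requested profile name collides with an existing machine name, and 80 (GUEST_NODE_ADD) when the node to add already exists. Below is a minimal sketch of asserting such an exit code from Go, reusing the duplicate-profile command from the log; the helper name is made up for illustration.

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	// runExpectingExit runs a command and reports whether it failed with the
	// expected exit status, mirroring the "Non-zero exit ... exit status 14"
	// assertions in the log above.
	func runExpectingExit(want int, name string, args ...string) (bool, error) {
		err := exec.Command(name, args...).Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode() == want, nil
		}
		// Either the command succeeded or it could not be started at all.
		return false, err
	}
	
	func main() {
		ok, err := runExpectingExit(14, "out/minikube-linux-amd64",
			"start", "-p", "multinode-019621-m02", "--driver=kvm2", "--container-runtime=crio")
		fmt.Println(ok, err)
	}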

                                                
                                    
TestScheduledStopUnix (119.49s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-011602 --memory=2048 --driver=kvm2  --container-runtime=crio
E0505 22:09:14.874670   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
E0505 22:09:31.830894   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/functional-273789/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-011602 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.732776365s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-011602 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-011602 -n scheduled-stop-011602
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-011602 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-011602 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-011602 -n scheduled-stop-011602
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-011602
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-011602 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-011602
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-011602: exit status 7 (73.390806ms)

                                                
                                                
-- stdout --
	scheduled-stop-011602
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-011602 -n scheduled-stop-011602
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-011602 -n scheduled-stop-011602: exit status 7 (75.641814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-011602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-011602
--- PASS: TestScheduledStopUnix (119.49s)
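
TestScheduledStopUnix schedules a stop (--schedule 15s), optionally cancels it (--cancel-scheduled), and then checks status --format={{.Host}} until it reports Stopped (the status command itself exits 7 once the host is down, which the test treats as acceptable). A rough polling sketch under those assumptions; the profile name and binary path come from the log, while the retry count and sleep interval are arbitrary:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForStopped polls `status --format={{.Host}}` until it prints
	// "Stopped" or the attempts run out. A non-zero exit is expected once
	// the host is down, so only the printed text is inspected.
	func waitForStopped(profile string, attempts int) bool {
		for i := 0; i < attempts; i++ {
			out, _ := exec.Command("out/minikube-linux-amd64", "status",
				"--format={{.Host}}", "-p", profile).Output()
			if strings.TrimSpace(string(out)) == "Stopped" {
				return true
			}
			time.Sleep(5 * time.Second)
		}
		return false
	}
	
	func main() {
		fmt.Println(waitForStopped("scheduled-stop-011602", 12))
	}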

                                                
                                    
TestRunningBinaryUpgrade (204.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3158105425 start -p running-upgrade-125284 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0505 22:11:51.947809   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3158105425 start -p running-upgrade-125284 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m13.796274561s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-125284 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-125284 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.59432852s)
helpers_test.go:175: Cleaning up "running-upgrade-125284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-125284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-125284: (1.161962322s)
--- PASS: TestRunningBinaryUpgrade (204.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-108412 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-108412 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (104.552847ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-108412] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-108412 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-108412 --driver=kvm2  --container-runtime=crio: (1m38.493665945s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-108412 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.58s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (125.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4248296741 start -p stopped-upgrade-494424 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4248296741 start -p stopped-upgrade-494424 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m13.824379568s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4248296741 -p stopped-upgrade-494424 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4248296741 -p stopped-upgrade-494424 stop: (2.12085786s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-494424 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-494424 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.467944916s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.41s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (45.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-108412 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-108412 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.30978711s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-108412 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-108412 status -o json: exit status 2 (288.313364ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-108412","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-108412
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-108412: (1.015569213s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.61s)
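
After the --no-kubernetes restart, status -o json exits 2 and reports the host running with kubelet and apiserver stopped, as shown in the stdout block above. A minimal sketch of decoding that JSON into a struct whose field names are copied from that output (the struct itself is illustrative):

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// profileStatus mirrors the JSON printed by `status -o json` in the
	// log above.
	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}
	
	func main() {
		raw := `{"Name":"NoKubernetes-108412","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			fmt.Println("decode error:", err)
			return
		}
		// A profile started with --no-kubernetes should look exactly like this.
		fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped")
	}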

                                                
                                    
TestNoKubernetes/serial/Start (29.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-108412 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-108412 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.343212164s)
--- PASS: TestNoKubernetes/serial/Start (29.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-108412 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-108412 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.848674ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
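
VerifyK8sNotRunning only confirms that "sudo systemctl is-active --quiet service kubelet" fails inside the guest: the ssh wrapper exits 1 while the remote systemctl reports status 3, which systemctl typically uses for an inactive unit. A small sketch of running the same check from Go and distinguishing a clean "not active" result from an ssh failure (illustrative only):

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Ask the NoKubernetes-108412 guest whether kubelet is active; the test
		// above expects this to fail. The wrapper command exits non-zero (1 in
		// the log) because the remote systemctl exited with status 3.
		err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-108412",
			"sudo systemctl is-active --quiet service kubelet").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("kubelet not active, wrapper exit code:", exitErr.ExitCode())
		} else if err == nil {
			fmt.Println("kubelet is active")
		} else {
			fmt.Println("ssh failed:", err)
		}
	}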

                                                
                                    
TestNoKubernetes/serial/ProfileList (27.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.72059348s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.565234712s)
--- PASS: TestNoKubernetes/serial/ProfileList (27.29s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-108412
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-108412: (1.508582995s)
--- PASS: TestNoKubernetes/serial/Stop (1.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-108412 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-108412 --driver=kvm2  --container-runtime=crio: (23.975333496s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.98s)

                                                
                                    
TestPause/serial/Start (81.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111649 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-111649 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m21.536430042s)
--- PASS: TestPause/serial/Start (81.54s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-494424
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-108412 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-108412 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.986237ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestNetworkPlugins/group/false (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-831483 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-831483 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (114.143938ms)

                                                
                                                
-- stdout --
	* [false-831483] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 22:14:37.908128   59009 out.go:291] Setting OutFile to fd 1 ...
	I0505 22:14:37.908249   59009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:14:37.908258   59009 out.go:304] Setting ErrFile to fd 2...
	I0505 22:14:37.908262   59009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 22:14:37.908467   59009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18602-11466/.minikube/bin
	I0505 22:14:37.909032   59009 out.go:298] Setting JSON to false
	I0505 22:14:37.909913   59009 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7025,"bootTime":1714940253,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0505 22:14:37.909972   59009 start.go:139] virtualization: kvm guest
	I0505 22:14:37.912287   59009 out.go:177] * [false-831483] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0505 22:14:37.913705   59009 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 22:14:37.913711   59009 notify.go:220] Checking for updates...
	I0505 22:14:37.915053   59009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 22:14:37.916487   59009 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18602-11466/kubeconfig
	I0505 22:14:37.917745   59009 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18602-11466/.minikube
	I0505 22:14:37.919090   59009 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0505 22:14:37.920263   59009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 22:14:37.922033   59009 config.go:182] Loaded profile config "force-systemd-flag-767669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:14:37.922144   59009 config.go:182] Loaded profile config "kubernetes-upgrade-131082": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0505 22:14:37.922260   59009 config.go:182] Loaded profile config "pause-111649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0505 22:14:37.922357   59009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 22:14:37.956883   59009 out.go:177] * Using the kvm2 driver based on user configuration
	I0505 22:14:37.958180   59009 start.go:297] selected driver: kvm2
	I0505 22:14:37.958200   59009 start.go:901] validating driver "kvm2" against <nil>
	I0505 22:14:37.958212   59009 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 22:14:37.960265   59009 out.go:177] 
	W0505 22:14:37.961750   59009 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0505 22:14:37.963101   59009 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-831483 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-831483" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-831483

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-831483"

                                                
                                                
----------------------- debugLogs end: false-831483 [took: 3.089964004s] --------------------------------
helpers_test.go:175: Cleaning up "false-831483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-831483
--- PASS: TestNetworkPlugins/group/false (3.37s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (79.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111649 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-111649 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.490963768s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (79.51s)

                                                
                                    
TestPause/serial/Pause (1.43s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-111649 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-111649 --alsologtostderr -v=5: (1.433541524s)
--- PASS: TestPause/serial/Pause (1.43s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-111649 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-111649 --output=json --layout=cluster: exit status 2 (281.99409ms)

                                                
                                                
-- stdout --
	{"Name":"pause-111649","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-111649","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
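For reference (illustrative, not part of the captured output): the cluster-layout JSON in the stdout above can be decoded with a small struct whose fields mirror the keys visible there (Name, StatusCode, StatusName, Components, Nodes). The Go sketch below assumes only those keys and is not the minikube or test-suite implementation.

	// decode_status.go -- minimal sketch: decode the cluster-layout JSON shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []node `json:"Nodes"`
	}

	func main() {
		// Trimmed copy of the stdout captured by the test above.
		raw := `{"Name":"pause-111649","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-111649","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
		for _, n := range st.Nodes {
			for name, c := range n.Components {
				fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
			}
		}
	}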

                                                
                                    
x
+
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-111649 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.98s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-111649 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.98s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.04s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-111649 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-111649 --alsologtostderr -v=5: (1.037716222s)
--- PASS: TestPause/serial/DeletePaused (1.04s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (120.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m0.579429272s)
--- PASS: TestNetworkPlugins/group/auto/Start (120.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (62.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0505 22:49:07.093740   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:07.099042   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:07.109380   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:07.129747   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:07.170084   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:07.250416   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:07.410818   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:07.731772   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:08.372488   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:09.653248   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m2.075822294s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-831483 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-831483 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zlvdd" [26e32ecd-2229-458c-b19b-2f48660ee695] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0505 22:49:12.213980   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:49:17.335131   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-zlvdd" [26e32ecd-2229-458c-b19b-2f48660ee695] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004148527s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.23s)
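For context on the NetCatPod checks: the harness waits up to 15m0s for pods labelled app=netcat in the default namespace to reach Running, as logged above. A rough stand-alone equivalent using client-go is sketched below; the kubeconfig location and poll interval are assumptions for illustration, not the values used by helpers_test.go.

	// wait_netcat.go -- rough stand-alone equivalent of "wait for app=netcat pods
	// to be Running"; not the helpers_test.go implementation.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig in the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		deadline := time.Now().Add(15 * time.Minute) // mirrors the 15m0s wait in the log
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("default").List(context.Background(),
				metav1.ListOptions{LabelSelector: "app=netcat"})
			if err != nil {
				log.Fatal(err)
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
			}
			time.Sleep(5 * time.Second) // poll interval is an assumption
		}
		log.Fatal("no app=netcat pod became Running before the deadline")
	}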

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-831483 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.95s)
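The Localhost and HairPin probes above run nc from inside the netcat pod: nc -z localhost 8080 exercises loopback, while nc -z netcat 8080 reaches the pod's own service by name (the hairpin path). A minimal way to re-run both probes by hand, assuming kubectl and the auto-831483 context shown in the log, is sketched below.

	// hairpin_probe.go -- re-run the Localhost and HairPin probes from the log by hand.
	// The context name and port are copied from the log; everything else is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probe(name, target string) {
		cmd := exec.Command("kubectl", "--context", "auto-831483",
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("%s probe failed: %v\n%s", name, err, out)
			return
		}
		fmt.Printf("%s probe succeeded\n", name)
	}

	func main() {
		probe("localhost", "localhost") // loopback inside the pod
		probe("hairpin", "netcat")      // pod reaching its own service
	}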

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (102.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0505 22:49:48.056007   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m42.602478061s)
--- PASS: TestNetworkPlugins/group/calico/Start (102.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r52rl" [2780f656-d254-4aa9-bc8f-273904e24142] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006496331s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-831483 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-831483 replace --force -f testdata/netcat-deployment.yaml
E0505 22:50:00.672718   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d4nq6" [b5eb3d8d-a5dc-426f-976e-e7983d6f0c4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-d4nq6" [b5eb3d8d-a5dc-426f-976e-e7983d6f0c4f] Running
E0505 22:50:09.634827   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004123093s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (97.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m37.943397751s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (97.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-831483 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (116.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0505 22:50:40.356135   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/client.crt: no such file or directory
E0505 22:51:21.317113   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/old-k8s-version-512320/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m56.685964566s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (116.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hflvt" [8b6d2376-12c3-40fd-bca0-3f24be9ed3bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006372803s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-831483 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-831483 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cv9hb" [6ad11e28-b691-4c35-9624-6c91ca2c1522] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0505 22:51:34.996766   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-cv9hb" [6ad11e28-b691-4c35-9624-6c91ca2c1522] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004965065s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-831483 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-831483 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-831483 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hjsnm" [fdb025c9-56e1-4975-a0a6-679e00960ae6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hjsnm" [fdb025c9-56e1-4975-a0a6-679e00960ae6] Running
E0505 22:51:50.937866   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/no-preload-112135/client.crt: no such file or directory
E0505 22:51:51.947716   18798 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18602-11466/.minikube/profiles/addons-476078/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004650603s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-831483 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (86.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.455223942s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (74.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-831483 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m14.162310884s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-831483 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-831483 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-h95nr" [1216ad52-7599-4aaa-9b29-3392c52075b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-h95nr" [1216ad52-7599-4aaa-9b29-3392c52075b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004702486s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-831483 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8xnsv" [8d08b800-4807-48fe-ade4-9d5439075cd0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004559594s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-831483 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-831483 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kcw4z" [5a6bb786-494d-4e01-ad39-f391eaa7dcd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kcw4z" [5a6bb786-494d-4e01-ad39-f391eaa7dcd0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004710447s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-831483 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-831483 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cjttn" [54bb1e5d-888c-42f6-9c64-9f1fdf2e21bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cjttn" [54bb1e5d-888c-42f6-9c64-9f1fdf2e21bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004375765s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (33.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-831483 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-831483 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14849026s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-831483 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-831483 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.185270309s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-831483 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (33.20s)
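The bridge/DNS run above retried the nslookup after two "connection timed out" failures before it finally resolved. The sketch below reproduces that retry pattern outside the harness; the kubectl command is copied from the log, while the attempt cap and the pause between attempts are assumptions.

	// dns_retry.go -- reproduce the bridge/DNS retry pattern seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Command copied from the log; the 5-attempt cap and 10s pause are assumptions.
		args := []string{
			"--context", "bridge-831483",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default",
		}
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
				return
			}
			fmt.Printf("attempt %d failed (%v):\n%s", attempt, err, out)
			time.Sleep(10 * time.Second)
		}
		fmt.Println("DNS lookup did not succeed within the retry budget")
	}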

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-831483 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-831483 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    

Test skip (36/275)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
274 TestNetworkPlugins/group/kubenet 3.21
282 TestNetworkPlugins/group/cilium 3.71
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-831483 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-831483

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-831483

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-831483

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /etc/hosts:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /etc/resolv.conf:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-831483

>>> host: crictl pods:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: crictl containers:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> k8s: describe netcat deployment:
error: context "kubenet-831483" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-831483" does not exist

>>> k8s: netcat logs:
error: context "kubenet-831483" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-831483" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-831483" does not exist

>>> k8s: coredns logs:
error: context "kubenet-831483" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-831483" does not exist

>>> k8s: api server logs:
error: context "kubenet-831483" does not exist

>>> host: /etc/cni:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: ip a s:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: ip r s:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: iptables-save:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: iptables table nat:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-831483" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-831483" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-831483" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: kubelet daemon config:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> k8s: kubelet logs:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-831483

>>> host: docker daemon status:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: docker daemon config:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: docker system info:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: cri-docker daemon status:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: cri-docker daemon config:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: cri-dockerd version:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: containerd daemon status:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: containerd daemon config:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: containerd config dump:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: crio daemon status:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: crio daemon config:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: /etc/crio:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

>>> host: crio config:
* Profile "kubenet-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-831483"

----------------------- debugLogs end: kubenet-831483 [took: 3.060243095s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-831483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-831483
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)

TestNetworkPlugins/group/cilium (3.71s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-831483 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-831483

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-831483

>>> host: /etc/nsswitch.conf:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /etc/hosts:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /etc/resolv.conf:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-831483

>>> host: crictl pods:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: crictl containers:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> k8s: describe netcat deployment:
error: context "cilium-831483" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-831483" does not exist

>>> k8s: netcat logs:
error: context "cilium-831483" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-831483" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-831483" does not exist

>>> k8s: coredns logs:
error: context "cilium-831483" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-831483" does not exist

>>> k8s: api server logs:
error: context "cilium-831483" does not exist

>>> host: /etc/cni:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: ip a s:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: ip r s:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: iptables-save:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: iptables table nat:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-831483

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-831483

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-831483" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-831483" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-831483

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-831483

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-831483" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-831483" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-831483" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-831483" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-831483" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: kubelet daemon config:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> k8s: kubelet logs:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-831483

>>> host: docker daemon status:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: docker daemon config:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: docker system info:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: cri-docker daemon status:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: cri-docker daemon config:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: cri-dockerd version:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: containerd daemon status:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: containerd daemon config:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: containerd config dump:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: crio daemon status:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: crio daemon config:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: /etc/crio:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

>>> host: crio config:
* Profile "cilium-831483" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831483"

----------------------- debugLogs end: cilium-831483 [took: 3.563733765s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-831483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-831483
--- SKIP: TestNetworkPlugins/group/cilium (3.71s)